00:00:00.001 Started by upstream project "autotest-per-patch" build number 132383
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.049 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.049 The recommended git tool is: git
00:00:00.049 using credential 00000000-0000-0000-0000-000000000002
00:00:00.051 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.088 Fetching changes from the remote Git repository
00:00:00.092 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.150 Using shallow fetch with depth 1
00:00:00.150 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.150 > git --version # timeout=10
00:00:00.214 > git --version # 'git version 2.39.2'
00:00:00.214 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.273 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.273 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.683 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.695 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.705 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.705 > git config core.sparsecheckout # timeout=10
00:00:04.717 > git read-tree -mu HEAD # timeout=10
00:00:04.731 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.756 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.756 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.839 [Pipeline] Start of Pipeline
00:00:04.853 [Pipeline] library
00:00:04.856 Loading library shm_lib@master
00:00:04.856 Library shm_lib@master is cached. Copying from home.
00:00:04.875 [Pipeline] node
00:00:04.897 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.899 [Pipeline] {
00:00:04.909 [Pipeline] catchError
00:00:04.910 [Pipeline] {
00:00:04.923 [Pipeline] wrap
00:00:04.933 [Pipeline] {
00:00:04.942 [Pipeline] stage
00:00:04.944 [Pipeline] { (Prologue)
00:00:05.147 [Pipeline] sh
00:00:05.432 + logger -p user.info -t JENKINS-CI
00:00:05.451 [Pipeline] echo
00:00:05.453 Node: WFP8
00:00:05.461 [Pipeline] sh
00:00:05.759 [Pipeline] setCustomBuildProperty
00:00:05.770 [Pipeline] echo
00:00:05.772 Cleanup processes
00:00:05.777 [Pipeline] sh
00:00:06.077 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.077 164112 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.125 [Pipeline] sh
00:00:06.407 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.407 ++ grep -v 'sudo pgrep'
00:00:06.407 ++ awk '{print $1}'
00:00:06.407 + sudo kill -9
00:00:06.407 + true
00:00:06.421 [Pipeline] cleanWs
00:00:06.431 [WS-CLEANUP] Deleting project workspace...
00:00:06.431 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.437 [WS-CLEANUP] done
00:00:06.441 [Pipeline] setCustomBuildProperty
00:00:06.457 [Pipeline] sh
00:00:06.741 + sudo git config --global --replace-all safe.directory '*'
00:00:06.827 [Pipeline] httpRequest
00:00:07.633 [Pipeline] echo
00:00:07.635 Sorcerer 10.211.164.20 is alive
00:00:07.645 [Pipeline] retry
00:00:07.648 [Pipeline] {
00:00:07.662 [Pipeline] httpRequest
00:00:07.666 HttpMethod: GET
00:00:07.667 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.667 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.680 Response Code: HTTP/1.1 200 OK
00:00:07.680 Success: Status code 200 is in the accepted range: 200,404
00:00:07.680 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:17.541 [Pipeline] }
00:00:17.559 [Pipeline] // retry
00:00:17.566 [Pipeline] sh
00:00:17.859 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:17.895 [Pipeline] httpRequest
00:00:18.557 [Pipeline] echo
00:00:18.559 Sorcerer 10.211.164.20 is alive
00:00:18.569 [Pipeline] retry
00:00:18.571 [Pipeline] {
00:00:18.585 [Pipeline] httpRequest
00:00:18.589 HttpMethod: GET
00:00:18.590 URL: http://10.211.164.20/packages/spdk_0383e688b2626e5f24bd789be9d7084d3cd6bdef.tar.gz
00:00:18.590 Sending request to url: http://10.211.164.20/packages/spdk_0383e688b2626e5f24bd789be9d7084d3cd6bdef.tar.gz
00:00:18.603 Response Code: HTTP/1.1 200 OK
00:00:18.604 Success: Status code 200 is in the accepted range: 200,404
00:00:18.604 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_0383e688b2626e5f24bd789be9d7084d3cd6bdef.tar.gz
00:01:39.395 [Pipeline] }
00:01:39.413 [Pipeline] // retry
00:01:39.420 [Pipeline] sh
00:01:39.705 + tar --no-same-owner -xf spdk_0383e688b2626e5f24bd789be9d7084d3cd6bdef.tar.gz
00:01:42.251 [Pipeline] sh
00:01:42.535 + git -C spdk log --oneline -n5
00:01:42.535 0383e688b bdev/nvme: Fix race between reset and qpair creation/deletion
00:01:42.535 a5dab6cf7 test/nvme/xnvme: Make sure nvme selected for tests is not used
00:01:42.535 876509865 test/nvme/xnvme: Test all conserve_cpu variants
00:01:42.535 a25b16198 test/nvme/xnvme: Enable polling in nvme driver
00:01:42.535 bb53e3ad9 test/nvme/xnvme: Drop null_blk
00:01:42.546 [Pipeline] }
00:01:42.560 [Pipeline] // stage
00:01:42.569 [Pipeline] stage
00:01:42.572 [Pipeline] { (Prepare)
00:01:42.588 [Pipeline] writeFile
00:01:42.603 [Pipeline] sh
00:01:42.886 + logger -p user.info -t JENKINS-CI
00:01:42.898 [Pipeline] sh
00:01:43.183 + logger -p user.info -t JENKINS-CI
00:01:43.195 [Pipeline] sh
00:01:43.479 + cat autorun-spdk.conf
00:01:43.479 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:43.479 SPDK_TEST_NVMF=1
00:01:43.479 SPDK_TEST_NVME_CLI=1
00:01:43.479 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:43.479 SPDK_TEST_NVMF_NICS=e810
00:01:43.479 SPDK_TEST_VFIOUSER=1
00:01:43.479 SPDK_RUN_UBSAN=1
00:01:43.479 NET_TYPE=phy
00:01:43.487 RUN_NIGHTLY=0
00:01:43.491 [Pipeline] readFile
00:01:43.515 [Pipeline] withEnv
00:01:43.517 [Pipeline] {
00:01:43.529 [Pipeline] sh
00:01:43.824 + set -ex
00:01:43.824 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:43.824 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:43.824 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:43.824 ++ SPDK_TEST_NVMF=1
00:01:43.824 ++ SPDK_TEST_NVME_CLI=1
00:01:43.824 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:43.824 ++ SPDK_TEST_NVMF_NICS=e810
00:01:43.824 ++ SPDK_TEST_VFIOUSER=1
00:01:43.824 ++ SPDK_RUN_UBSAN=1
00:01:43.824 ++ NET_TYPE=phy
00:01:43.824 ++ RUN_NIGHTLY=0
00:01:43.824 + case $SPDK_TEST_NVMF_NICS in
00:01:43.824 + DRIVERS=ice
00:01:43.825 + [[ tcp == \r\d\m\a ]]
00:01:43.825 + [[ -n ice ]]
00:01:43.825 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:43.825 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:43.825 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:43.825 rmmod: ERROR: Module irdma is not currently loaded
00:01:43.825 rmmod: ERROR: Module i40iw is not currently loaded
00:01:43.825 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:43.825 + true
00:01:43.825 + for D in $DRIVERS
00:01:43.825 + sudo modprobe ice
00:01:43.825 + exit 0
00:01:43.834 [Pipeline] }
00:01:43.847 [Pipeline] // withEnv
00:01:43.851 [Pipeline] }
00:01:43.864 [Pipeline] // stage
00:01:43.872 [Pipeline] catchError
00:01:43.873 [Pipeline] {
00:01:43.886 [Pipeline] timeout
00:01:43.886 Timeout set to expire in 1 hr 0 min
00:01:43.887 [Pipeline] {
00:01:43.900 [Pipeline] stage
00:01:43.902 [Pipeline] { (Tests)
00:01:43.915 [Pipeline] sh
00:01:44.200 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:44.201 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:44.201 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:44.201 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:44.201 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:44.201 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:44.201 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:44.201 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:44.201 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:44.201 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:44.201 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:44.201 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:44.201 + source /etc/os-release
00:01:44.201 ++ NAME='Fedora Linux'
00:01:44.201 ++ VERSION='39 (Cloud Edition)'
00:01:44.201 ++ ID=fedora
00:01:44.201 ++ VERSION_ID=39
00:01:44.201 ++ VERSION_CODENAME=
00:01:44.201 ++ PLATFORM_ID=platform:f39
00:01:44.201 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:44.201 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:44.201 ++ LOGO=fedora-logo-icon
00:01:44.201 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:44.201 ++ HOME_URL=https://fedoraproject.org/
00:01:44.201 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:44.201 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:44.201 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:44.201 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:44.201 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:44.201 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:44.201 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:44.201 ++ SUPPORT_END=2024-11-12
00:01:44.201 ++ VARIANT='Cloud Edition'
00:01:44.201 ++ VARIANT_ID=cloud
00:01:44.201 + uname -a
00:01:44.201 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:44.201 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:46.741 Hugepages
00:01:46.741 node hugesize free / total
00:01:46.741 node0 1048576kB 0 / 0
00:01:46.741 node0 2048kB 0 / 0
00:01:46.741 node1 1048576kB 0 / 0
00:01:46.741 node1 2048kB 0 / 0
00:01:46.741 
00:01:46.741 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:46.741 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:46.741 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:46.741 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:46.741 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:46.741 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:46.741 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:46.741 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:46.741 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:46.741 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:46.741 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:46.741 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:46.741 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:46.741 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:46.741 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:46.741 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:46.741 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:46.741 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:46.741 + rm -f /tmp/spdk-ld-path
00:01:46.741 + source autorun-spdk.conf
00:01:46.741 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:46.741 ++ SPDK_TEST_NVMF=1
00:01:46.741 ++ SPDK_TEST_NVME_CLI=1
00:01:46.741 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:46.741 ++ SPDK_TEST_NVMF_NICS=e810
00:01:46.741 ++ SPDK_TEST_VFIOUSER=1
00:01:46.741 ++ SPDK_RUN_UBSAN=1
00:01:46.741 ++ NET_TYPE=phy
00:01:46.741 ++ RUN_NIGHTLY=0
00:01:46.741 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:46.741 + [[ -n '' ]]
00:01:46.741 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:46.741 + for M in /var/spdk/build-*-manifest.txt
00:01:46.741 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:46.741 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:46.741 + for M in /var/spdk/build-*-manifest.txt
00:01:46.741 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:46.741 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:46.741 + for M in /var/spdk/build-*-manifest.txt
00:01:46.741 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:46.741 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:46.741 ++ uname
00:01:46.741 + [[ Linux == \L\i\n\u\x ]]
00:01:46.741 + sudo dmesg -T
00:01:47.001 + sudo dmesg --clear
00:01:47.001 + dmesg_pid=165037
00:01:47.001 + [[ Fedora Linux == FreeBSD ]]
00:01:47.001 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:47.001 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:47.001 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:47.001 + [[ -x /usr/src/fio-static/fio ]]
00:01:47.001 + export FIO_BIN=/usr/src/fio-static/fio
00:01:47.001 + FIO_BIN=/usr/src/fio-static/fio
00:01:47.001 + sudo dmesg -Tw
00:01:47.001 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:47.001 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:47.001 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:47.001 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:47.001 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:47.001 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:47.001 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:47.001 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:47.002 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:47.002 12:11:29 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:47.002 12:11:29 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:47.002 12:11:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:47.002 12:11:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:47.002 12:11:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:47.002 12:11:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:47.002 12:11:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:47.002 12:11:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:47.002 12:11:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:47.002 12:11:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:47.002 12:11:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:47.002 12:11:29 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:47.002 12:11:29 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:47.002 12:11:30 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:47.002 12:11:30 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:47.002 12:11:30 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:47.002 12:11:30 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:47.002 12:11:30 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:47.002 12:11:30 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:47.002 12:11:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:47.002 12:11:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:47.002 12:11:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:47.002 12:11:30 -- paths/export.sh@5 -- $ export PATH
00:01:47.002 12:11:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:47.002 12:11:30 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:47.002 12:11:30 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:47.002 12:11:30 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732101090.XXXXXX
00:01:47.002 12:11:30 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732101090.BNZ4t3
00:01:47.002 12:11:30 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:47.002 12:11:30 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:47.002 12:11:30 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:47.002 12:11:30 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:47.002 12:11:30 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:47.002 12:11:30 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:47.002 12:11:30 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:47.002 12:11:30 -- common/autotest_common.sh@10 -- $ set +x
00:01:47.002 12:11:30 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:47.002 12:11:30 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:47.002 12:11:30 -- pm/common@17 -- $ local monitor
00:01:47.002 12:11:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:47.002 12:11:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:47.002 12:11:30 -- pm/common@21 -- $ date +%s
00:01:47.002 12:11:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:47.002 12:11:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:47.002 12:11:30 -- pm/common@21 -- $ date +%s
00:01:47.002 12:11:30 -- pm/common@25 -- $ sleep 1
00:01:47.002 12:11:30 -- pm/common@21 -- $ date +%s
00:01:47.002 12:11:30 -- pm/common@21 -- $ date +%s
00:01:47.002 12:11:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732101090
00:01:47.002 12:11:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732101090
00:01:47.002 12:11:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732101090
00:01:47.002 12:11:30 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732101090
00:01:47.261 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732101090_collect-cpu-load.pm.log
00:01:47.261 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732101090_collect-cpu-temp.pm.log
00:01:47.262 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732101090_collect-vmstat.pm.log
00:01:47.262 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732101090_collect-bmc-pm.bmc.pm.log
00:01:48.226 12:11:31 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:48.226 12:11:31 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:48.226 12:11:31 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:48.226 12:11:31 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:48.226 12:11:31 -- spdk/autobuild.sh@16 -- $ date -u
00:01:48.226 Wed Nov 20 11:11:31 AM UTC 2024
00:01:48.226 12:11:31 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:48.226 v25.01-pre-213-g0383e688b
00:01:48.226 12:11:31 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:48.226 12:11:31 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:48.226 12:11:31 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:48.226 12:11:31 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:48.226 12:11:31 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:48.226 12:11:31 -- common/autotest_common.sh@10 -- $ set +x
00:01:48.226 ************************************
00:01:48.226 START TEST ubsan
00:01:48.226 ************************************
00:01:48.226 12:11:31 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
using ubsan
00:01:48.226 
00:01:48.226 real 0m0.000s user 0m0.000s sys 0m0.000s
00:01:48.226 12:11:31 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:48.226 12:11:31 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:48.226 ************************************
00:01:48.226 END TEST ubsan
00:01:48.226 ************************************
00:01:48.226 12:11:31 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:48.226 12:11:31 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:48.226 12:11:31 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:48.226 12:11:31 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:48.226 12:11:31 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:48.226 12:11:31 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:48.226 12:11:31 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:48.226 12:11:31 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:48.226 12:11:31 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:48.485 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:48.485 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:48.744 Using 'verbs' RDMA provider
00:02:01.528 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:13.748 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:13.748 Creating mk/config.mk...done.
00:02:13.748 Creating mk/cc.flags.mk...done.
00:02:13.748 Type 'make' to build.
00:02:13.748 12:11:56 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:02:13.748 12:11:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:13.748 12:11:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:13.748 12:11:56 -- common/autotest_common.sh@10 -- $ set +x
00:02:13.748 ************************************
00:02:13.748 START TEST make
00:02:13.748 ************************************
00:02:13.748 12:11:56 make -- common/autotest_common.sh@1129 -- $ make -j96
00:02:14.317 make[1]: Nothing to be done for 'all'.
00:02:15.704 The Meson build system
00:02:15.704 Version: 1.5.0
00:02:15.704 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:15.704 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:15.704 Build type: native build
00:02:15.704 Project name: libvfio-user
00:02:15.704 Project version: 0.0.1
00:02:15.704 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:15.704 C linker for the host machine: cc ld.bfd 2.40-14
00:02:15.704 Host machine cpu family: x86_64
00:02:15.704 Host machine cpu: x86_64
00:02:15.704 Run-time dependency threads found: YES
00:02:15.704 Library dl found: YES
00:02:15.704 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:15.704 Run-time dependency json-c found: YES 0.17
00:02:15.704 Run-time dependency cmocka found: YES 1.1.7
00:02:15.704 Program pytest-3 found: NO
00:02:15.704 Program flake8 found: NO
00:02:15.704 Program misspell-fixer found: NO
00:02:15.704 Program restructuredtext-lint found: NO
00:02:15.704 Program valgrind found: YES (/usr/bin/valgrind)
00:02:15.704 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:15.704 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:15.704 Compiler for C supports arguments -Wwrite-strings: YES
00:02:15.704 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:15.704 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:15.704 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:15.704 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:15.704 Build targets in project: 8
00:02:15.704 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:15.704 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:15.704 
00:02:15.704 libvfio-user 0.0.1
00:02:15.704 
00:02:15.704 User defined options
00:02:15.704 buildtype : debug
00:02:15.704 default_library: shared
00:02:15.704 libdir : /usr/local/lib
00:02:15.704 
00:02:15.704 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:16.281 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:16.281 [1/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:16.281 [2/37] Compiling C object samples/null.p/null.c.o
00:02:16.281 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:16.281 [4/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:16.281 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:16.281 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:16.281 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:16.281 [8/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:16.281 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:16.281 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:16.281 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:16.281 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:16.281 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:16.281 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:16.281 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:16.281 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:16.281 [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:16.281 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:16.281 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:16.281 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:16.281 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:16.281 [22/37] Compiling C object samples/server.p/server.c.o
00:02:16.281 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:16.281 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:16.281 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:16.281 [26/37] Compiling C object samples/client.p/client.c.o
00:02:16.281 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:16.281 [28/37] Linking target samples/client
00:02:16.281 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:02:16.281 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:16.540 [31/37] Linking target test/unit_tests
00:02:16.540 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:16.540 [33/37] Linking target samples/null
00:02:16.540 [34/37] Linking target samples/server
00:02:16.540 [35/37] Linking target samples/gpio-pci-idio-16
00:02:16.540 [36/37] Linking target samples/shadow_ioeventfd_server
00:02:16.540 [37/37] Linking target samples/lspci
00:02:16.540 INFO: autodetecting backend as ninja
00:02:16.540 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:16.540 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:17.107 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:17.107 ninja: no work to do.
00:02:22.376 The Meson build system
00:02:22.376 Version: 1.5.0
00:02:22.376 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:22.376 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:22.376 Build type: native build
00:02:22.376 Program cat found: YES (/usr/bin/cat)
00:02:22.376 Project name: DPDK
00:02:22.376 Project version: 24.03.0
00:02:22.376 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:22.376 C linker for the host machine: cc ld.bfd 2.40-14
00:02:22.376 Host machine cpu family: x86_64
00:02:22.376 Host machine cpu: x86_64
00:02:22.376 Message: ## Building in Developer Mode ##
00:02:22.376 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:22.377 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:22.377 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:22.377 Program python3 found: YES (/usr/bin/python3)
00:02:22.377 Program cat found: YES (/usr/bin/cat)
00:02:22.377 Compiler for C supports arguments -march=native: YES
00:02:22.377 Checking for size of "void *" : 8
00:02:22.377 Checking for size of "void *" : 8 (cached)
00:02:22.377 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:22.377 Library m found: YES
00:02:22.377 Library numa found: YES
00:02:22.377 Has header "numaif.h" : YES
00:02:22.377 Library fdt found: NO
00:02:22.377 Library execinfo found: NO
00:02:22.377 Has header "execinfo.h" : YES
00:02:22.377 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:22.377 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:22.377 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:22.377 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:22.377 Run-time dependency openssl found: YES 3.1.1
00:02:22.377 Run-time dependency libpcap found: YES 1.10.4
00:02:22.377 Has header "pcap.h" with dependency libpcap: YES
00:02:22.377 Compiler for C supports arguments -Wcast-qual: YES
00:02:22.377 Compiler for C supports arguments -Wdeprecated: YES
00:02:22.377 Compiler for C supports arguments -Wformat: YES
00:02:22.377 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:22.377 Compiler for C supports arguments -Wformat-security: NO
00:02:22.377 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:22.377 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:22.377 Compiler for C supports arguments -Wnested-externs: YES
00:02:22.377 Compiler for C supports arguments -Wold-style-definition: YES
00:02:22.377 Compiler for C supports arguments -Wpointer-arith: YES
00:02:22.377 Compiler for C supports arguments -Wsign-compare: YES
00:02:22.377 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:22.377 Compiler for C supports arguments -Wundef: YES
00:02:22.377 Compiler for C supports arguments -Wwrite-strings: YES
00:02:22.377 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:22.377 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:22.377 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:22.377 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:22.377 Program objdump found: YES (/usr/bin/objdump)
00:02:22.377 Compiler for C supports arguments -mavx512f: YES
00:02:22.377 Checking if "AVX512 checking" compiles: YES
00:02:22.377 Fetching value of define "__SSE4_2__" : 1
00:02:22.377 Fetching value of define "__AES__" : 1
00:02:22.377 Fetching value of define "__AVX__" : 1
00:02:22.377 Fetching value of define "__AVX2__" : 1
00:02:22.377 Fetching value of define "__AVX512BW__" : 1
00:02:22.377 Fetching value of define "__AVX512CD__" : 1
00:02:22.377 Fetching value of define "__AVX512DQ__" : 1
00:02:22.377 Fetching value of define "__AVX512F__" : 1
00:02:22.377 Fetching value of define "__AVX512VL__" : 1 00:02:22.377 Fetching value of define "__PCLMUL__" : 1 00:02:22.377 Fetching value of define "__RDRND__" : 1 00:02:22.377 Fetching value of define "__RDSEED__" : 1 00:02:22.377 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:22.377 Fetching value of define "__znver1__" : (undefined) 00:02:22.377 Fetching value of define "__znver2__" : (undefined) 00:02:22.377 Fetching value of define "__znver3__" : (undefined) 00:02:22.377 Fetching value of define "__znver4__" : (undefined) 00:02:22.377 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:22.377 Message: lib/log: Defining dependency "log" 00:02:22.377 Message: lib/kvargs: Defining dependency "kvargs" 00:02:22.377 Message: lib/telemetry: Defining dependency "telemetry" 00:02:22.377 Checking for function "getentropy" : NO 00:02:22.377 Message: lib/eal: Defining dependency "eal" 00:02:22.377 Message: lib/ring: Defining dependency "ring" 00:02:22.377 Message: lib/rcu: Defining dependency "rcu" 00:02:22.377 Message: lib/mempool: Defining dependency "mempool" 00:02:22.377 Message: lib/mbuf: Defining dependency "mbuf" 00:02:22.377 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:22.377 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:22.377 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:22.377 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:22.377 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:22.377 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:22.377 Compiler for C supports arguments -mpclmul: YES 00:02:22.377 Compiler for C supports arguments -maes: YES 00:02:22.377 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:22.377 Compiler for C supports arguments -mavx512bw: YES 00:02:22.377 Compiler for C supports arguments -mavx512dq: YES 00:02:22.377 Compiler for C supports arguments -mavx512vl: YES 00:02:22.377 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:02:22.377 Compiler for C supports arguments -mavx2: YES 00:02:22.377 Compiler for C supports arguments -mavx: YES 00:02:22.377 Message: lib/net: Defining dependency "net" 00:02:22.377 Message: lib/meter: Defining dependency "meter" 00:02:22.377 Message: lib/ethdev: Defining dependency "ethdev" 00:02:22.377 Message: lib/pci: Defining dependency "pci" 00:02:22.377 Message: lib/cmdline: Defining dependency "cmdline" 00:02:22.377 Message: lib/hash: Defining dependency "hash" 00:02:22.377 Message: lib/timer: Defining dependency "timer" 00:02:22.377 Message: lib/compressdev: Defining dependency "compressdev" 00:02:22.377 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:22.377 Message: lib/dmadev: Defining dependency "dmadev" 00:02:22.377 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:22.377 Message: lib/power: Defining dependency "power" 00:02:22.377 Message: lib/reorder: Defining dependency "reorder" 00:02:22.377 Message: lib/security: Defining dependency "security" 00:02:22.377 Has header "linux/userfaultfd.h" : YES 00:02:22.377 Has header "linux/vduse.h" : YES 00:02:22.377 Message: lib/vhost: Defining dependency "vhost" 00:02:22.377 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:22.377 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:22.377 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:22.377 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:22.377 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:22.377 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:22.377 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:22.377 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:22.377 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:22.377 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:02:22.377 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:22.377 Configuring doxy-api-html.conf using configuration 00:02:22.377 Configuring doxy-api-man.conf using configuration 00:02:22.377 Program mandb found: YES (/usr/bin/mandb) 00:02:22.377 Program sphinx-build found: NO 00:02:22.377 Configuring rte_build_config.h using configuration 00:02:22.377 Message: 00:02:22.377 ================= 00:02:22.377 Applications Enabled 00:02:22.377 ================= 00:02:22.377 00:02:22.377 apps: 00:02:22.377 00:02:22.377 00:02:22.377 Message: 00:02:22.377 ================= 00:02:22.377 Libraries Enabled 00:02:22.377 ================= 00:02:22.377 00:02:22.377 libs: 00:02:22.377 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:22.377 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:22.377 cryptodev, dmadev, power, reorder, security, vhost, 00:02:22.377 00:02:22.377 Message: 00:02:22.377 =============== 00:02:22.377 Drivers Enabled 00:02:22.377 =============== 00:02:22.377 00:02:22.377 common: 00:02:22.377 00:02:22.377 bus: 00:02:22.377 pci, vdev, 00:02:22.377 mempool: 00:02:22.377 ring, 00:02:22.377 dma: 00:02:22.377 00:02:22.377 net: 00:02:22.377 00:02:22.377 crypto: 00:02:22.377 00:02:22.377 compress: 00:02:22.377 00:02:22.377 vdpa: 00:02:22.377 00:02:22.377 00:02:22.377 Message: 00:02:22.377 ================= 00:02:22.377 Content Skipped 00:02:22.377 ================= 00:02:22.377 00:02:22.377 apps: 00:02:22.377 dumpcap: explicitly disabled via build config 00:02:22.377 graph: explicitly disabled via build config 00:02:22.377 pdump: explicitly disabled via build config 00:02:22.377 proc-info: explicitly disabled via build config 00:02:22.377 test-acl: explicitly disabled via build config 00:02:22.377 test-bbdev: explicitly disabled via build config 00:02:22.377 test-cmdline: explicitly disabled via build config 00:02:22.377 test-compress-perf: explicitly disabled via build config 00:02:22.377 test-crypto-perf: explicitly disabled 
via build config 00:02:22.377 test-dma-perf: explicitly disabled via build config 00:02:22.377 test-eventdev: explicitly disabled via build config 00:02:22.377 test-fib: explicitly disabled via build config 00:02:22.377 test-flow-perf: explicitly disabled via build config 00:02:22.377 test-gpudev: explicitly disabled via build config 00:02:22.378 test-mldev: explicitly disabled via build config 00:02:22.378 test-pipeline: explicitly disabled via build config 00:02:22.378 test-pmd: explicitly disabled via build config 00:02:22.378 test-regex: explicitly disabled via build config 00:02:22.378 test-sad: explicitly disabled via build config 00:02:22.378 test-security-perf: explicitly disabled via build config 00:02:22.378 00:02:22.378 libs: 00:02:22.378 argparse: explicitly disabled via build config 00:02:22.378 metrics: explicitly disabled via build config 00:02:22.378 acl: explicitly disabled via build config 00:02:22.378 bbdev: explicitly disabled via build config 00:02:22.378 bitratestats: explicitly disabled via build config 00:02:22.378 bpf: explicitly disabled via build config 00:02:22.378 cfgfile: explicitly disabled via build config 00:02:22.378 distributor: explicitly disabled via build config 00:02:22.378 efd: explicitly disabled via build config 00:02:22.378 eventdev: explicitly disabled via build config 00:02:22.378 dispatcher: explicitly disabled via build config 00:02:22.378 gpudev: explicitly disabled via build config 00:02:22.378 gro: explicitly disabled via build config 00:02:22.378 gso: explicitly disabled via build config 00:02:22.378 ip_frag: explicitly disabled via build config 00:02:22.378 jobstats: explicitly disabled via build config 00:02:22.378 latencystats: explicitly disabled via build config 00:02:22.378 lpm: explicitly disabled via build config 00:02:22.378 member: explicitly disabled via build config 00:02:22.378 pcapng: explicitly disabled via build config 00:02:22.378 rawdev: explicitly disabled via build config 00:02:22.378 regexdev: 
explicitly disabled via build config 00:02:22.378 mldev: explicitly disabled via build config 00:02:22.378 rib: explicitly disabled via build config 00:02:22.378 sched: explicitly disabled via build config 00:02:22.378 stack: explicitly disabled via build config 00:02:22.378 ipsec: explicitly disabled via build config 00:02:22.378 pdcp: explicitly disabled via build config 00:02:22.378 fib: explicitly disabled via build config 00:02:22.378 port: explicitly disabled via build config 00:02:22.378 pdump: explicitly disabled via build config 00:02:22.378 table: explicitly disabled via build config 00:02:22.378 pipeline: explicitly disabled via build config 00:02:22.378 graph: explicitly disabled via build config 00:02:22.378 node: explicitly disabled via build config 00:02:22.378 00:02:22.378 drivers: 00:02:22.378 common/cpt: not in enabled drivers build config 00:02:22.378 common/dpaax: not in enabled drivers build config 00:02:22.378 common/iavf: not in enabled drivers build config 00:02:22.378 common/idpf: not in enabled drivers build config 00:02:22.378 common/ionic: not in enabled drivers build config 00:02:22.378 common/mvep: not in enabled drivers build config 00:02:22.378 common/octeontx: not in enabled drivers build config 00:02:22.378 bus/auxiliary: not in enabled drivers build config 00:02:22.378 bus/cdx: not in enabled drivers build config 00:02:22.378 bus/dpaa: not in enabled drivers build config 00:02:22.378 bus/fslmc: not in enabled drivers build config 00:02:22.378 bus/ifpga: not in enabled drivers build config 00:02:22.378 bus/platform: not in enabled drivers build config 00:02:22.378 bus/uacce: not in enabled drivers build config 00:02:22.378 bus/vmbus: not in enabled drivers build config 00:02:22.378 common/cnxk: not in enabled drivers build config 00:02:22.378 common/mlx5: not in enabled drivers build config 00:02:22.378 common/nfp: not in enabled drivers build config 00:02:22.378 common/nitrox: not in enabled drivers build config 00:02:22.378 
common/qat: not in enabled drivers build config 00:02:22.378 common/sfc_efx: not in enabled drivers build config 00:02:22.378 mempool/bucket: not in enabled drivers build config 00:02:22.378 mempool/cnxk: not in enabled drivers build config 00:02:22.378 mempool/dpaa: not in enabled drivers build config 00:02:22.378 mempool/dpaa2: not in enabled drivers build config 00:02:22.378 mempool/octeontx: not in enabled drivers build config 00:02:22.378 mempool/stack: not in enabled drivers build config 00:02:22.378 dma/cnxk: not in enabled drivers build config 00:02:22.378 dma/dpaa: not in enabled drivers build config 00:02:22.378 dma/dpaa2: not in enabled drivers build config 00:02:22.378 dma/hisilicon: not in enabled drivers build config 00:02:22.378 dma/idxd: not in enabled drivers build config 00:02:22.378 dma/ioat: not in enabled drivers build config 00:02:22.378 dma/skeleton: not in enabled drivers build config 00:02:22.378 net/af_packet: not in enabled drivers build config 00:02:22.378 net/af_xdp: not in enabled drivers build config 00:02:22.378 net/ark: not in enabled drivers build config 00:02:22.378 net/atlantic: not in enabled drivers build config 00:02:22.378 net/avp: not in enabled drivers build config 00:02:22.378 net/axgbe: not in enabled drivers build config 00:02:22.378 net/bnx2x: not in enabled drivers build config 00:02:22.378 net/bnxt: not in enabled drivers build config 00:02:22.378 net/bonding: not in enabled drivers build config 00:02:22.378 net/cnxk: not in enabled drivers build config 00:02:22.378 net/cpfl: not in enabled drivers build config 00:02:22.378 net/cxgbe: not in enabled drivers build config 00:02:22.378 net/dpaa: not in enabled drivers build config 00:02:22.378 net/dpaa2: not in enabled drivers build config 00:02:22.378 net/e1000: not in enabled drivers build config 00:02:22.378 net/ena: not in enabled drivers build config 00:02:22.378 net/enetc: not in enabled drivers build config 00:02:22.378 net/enetfec: not in enabled drivers build 
config 00:02:22.378 net/enic: not in enabled drivers build config 00:02:22.378 net/failsafe: not in enabled drivers build config 00:02:22.378 net/fm10k: not in enabled drivers build config 00:02:22.378 net/gve: not in enabled drivers build config 00:02:22.378 net/hinic: not in enabled drivers build config 00:02:22.378 net/hns3: not in enabled drivers build config 00:02:22.378 net/i40e: not in enabled drivers build config 00:02:22.378 net/iavf: not in enabled drivers build config 00:02:22.378 net/ice: not in enabled drivers build config 00:02:22.378 net/idpf: not in enabled drivers build config 00:02:22.378 net/igc: not in enabled drivers build config 00:02:22.378 net/ionic: not in enabled drivers build config 00:02:22.378 net/ipn3ke: not in enabled drivers build config 00:02:22.378 net/ixgbe: not in enabled drivers build config 00:02:22.378 net/mana: not in enabled drivers build config 00:02:22.378 net/memif: not in enabled drivers build config 00:02:22.378 net/mlx4: not in enabled drivers build config 00:02:22.378 net/mlx5: not in enabled drivers build config 00:02:22.378 net/mvneta: not in enabled drivers build config 00:02:22.378 net/mvpp2: not in enabled drivers build config 00:02:22.378 net/netvsc: not in enabled drivers build config 00:02:22.378 net/nfb: not in enabled drivers build config 00:02:22.378 net/nfp: not in enabled drivers build config 00:02:22.378 net/ngbe: not in enabled drivers build config 00:02:22.378 net/null: not in enabled drivers build config 00:02:22.378 net/octeontx: not in enabled drivers build config 00:02:22.378 net/octeon_ep: not in enabled drivers build config 00:02:22.378 net/pcap: not in enabled drivers build config 00:02:22.378 net/pfe: not in enabled drivers build config 00:02:22.378 net/qede: not in enabled drivers build config 00:02:22.378 net/ring: not in enabled drivers build config 00:02:22.378 net/sfc: not in enabled drivers build config 00:02:22.378 net/softnic: not in enabled drivers build config 00:02:22.378 net/tap: 
not in enabled drivers build config 00:02:22.378 net/thunderx: not in enabled drivers build config 00:02:22.378 net/txgbe: not in enabled drivers build config 00:02:22.378 net/vdev_netvsc: not in enabled drivers build config 00:02:22.378 net/vhost: not in enabled drivers build config 00:02:22.378 net/virtio: not in enabled drivers build config 00:02:22.378 net/vmxnet3: not in enabled drivers build config 00:02:22.378 raw/*: missing internal dependency, "rawdev" 00:02:22.378 crypto/armv8: not in enabled drivers build config 00:02:22.378 crypto/bcmfs: not in enabled drivers build config 00:02:22.378 crypto/caam_jr: not in enabled drivers build config 00:02:22.378 crypto/ccp: not in enabled drivers build config 00:02:22.378 crypto/cnxk: not in enabled drivers build config 00:02:22.378 crypto/dpaa_sec: not in enabled drivers build config 00:02:22.378 crypto/dpaa2_sec: not in enabled drivers build config 00:02:22.378 crypto/ipsec_mb: not in enabled drivers build config 00:02:22.378 crypto/mlx5: not in enabled drivers build config 00:02:22.378 crypto/mvsam: not in enabled drivers build config 00:02:22.378 crypto/nitrox: not in enabled drivers build config 00:02:22.378 crypto/null: not in enabled drivers build config 00:02:22.378 crypto/octeontx: not in enabled drivers build config 00:02:22.378 crypto/openssl: not in enabled drivers build config 00:02:22.378 crypto/scheduler: not in enabled drivers build config 00:02:22.378 crypto/uadk: not in enabled drivers build config 00:02:22.378 crypto/virtio: not in enabled drivers build config 00:02:22.378 compress/isal: not in enabled drivers build config 00:02:22.378 compress/mlx5: not in enabled drivers build config 00:02:22.378 compress/nitrox: not in enabled drivers build config 00:02:22.378 compress/octeontx: not in enabled drivers build config 00:02:22.378 compress/zlib: not in enabled drivers build config 00:02:22.378 regex/*: missing internal dependency, "regexdev" 00:02:22.378 ml/*: missing internal dependency, "mldev" 
00:02:22.378 vdpa/ifc: not in enabled drivers build config 00:02:22.378 vdpa/mlx5: not in enabled drivers build config 00:02:22.378 vdpa/nfp: not in enabled drivers build config 00:02:22.378 vdpa/sfc: not in enabled drivers build config 00:02:22.379 event/*: missing internal dependency, "eventdev" 00:02:22.379 baseband/*: missing internal dependency, "bbdev" 00:02:22.379 gpu/*: missing internal dependency, "gpudev" 00:02:22.379 00:02:22.379 00:02:22.379 Build targets in project: 85 00:02:22.379 00:02:22.379 DPDK 24.03.0 00:02:22.379 00:02:22.379 User defined options 00:02:22.379 buildtype : debug 00:02:22.379 default_library : shared 00:02:22.379 libdir : lib 00:02:22.379 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:22.379 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:22.379 c_link_args : 00:02:22.379 cpu_instruction_set: native 00:02:22.379 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:22.379 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:22.379 enable_docs : false 00:02:22.379 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:22.379 enable_kmods : false 00:02:22.379 max_lcores : 128 00:02:22.379 tests : false 00:02:22.379 00:02:22.379 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:22.969 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:22.969 [1/268] Compiling C object 
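The "User defined options" summary above shows how this run configures DPDK 24.03: a debug build of shared libraries with custom `c_args`, most apps and libraries disabled, and only the bus/pci, bus/vdev, mempool/ring, and power drivers enabled. A hedged reconstruction of the equivalent `meson setup` invocation follows; the option names (`disable_apps`, `disable_libs`, `enable_drivers`, `max_lcores`, `tests`) are real DPDK Meson options, but the disable lists here are deliberately abbreviated, and the full comma-separated sets are the ones printed in the log:

```shell
#!/bin/sh
# Sketch of the meson setup call implied by the configure summary above.
# Paths are placeholders; the log builds under spdk/dpdk/build-tmp.
DPDK_SRC=./dpdk
DPDK_BUILD="$DPDK_SRC/build-tmp"

MESON_CMD="meson setup $DPDK_BUILD $DPDK_SRC \
  --buildtype=debug \
  --default-library=shared \
  -Dc_args='-fPIC -Werror' \
  -Ddisable_apps=dumpcap,pdump,test-pmd \
  -Ddisable_libs=acl,bbdev,graph \
  -Denable_drivers=bus/pci,bus/vdev,mempool/ring \
  -Dmax_lcores=128 \
  -Dtests=false"

printf '%s\n' "$MESON_CMD"
```

After `meson setup`, the log's `ninja -C build-tmp` step compiles the 268 targets listed below; the "Content Skipped" sections are the direct effect of the `disable_apps`/`disable_libs` lists and of missing internal dependencies such as `rawdev` and `eventdev`.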
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:22.969 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:22.969 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:22.969 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:22.969 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:22.969 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:22.969 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:23.262 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:23.262 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:23.262 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:23.262 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:23.262 [12/268] Linking static target lib/librte_kvargs.a 00:02:23.262 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:23.262 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:23.262 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:23.262 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:23.262 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:23.262 [18/268] Linking static target lib/librte_log.a 00:02:23.262 [19/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:23.262 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:23.262 [21/268] Linking static target lib/librte_pci.a 00:02:23.262 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:23.262 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:23.262 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:23.539 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:23.539 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:23.539 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:23.539 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:23.539 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:23.539 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:23.539 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:23.539 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:23.539 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:23.539 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:23.539 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:23.539 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:23.539 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:23.539 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:23.539 [39/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:23.539 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:23.539 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:23.539 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:23.539 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:23.539 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:23.539 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:23.539 [46/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:23.539 [47/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:23.539 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:23.539 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:23.539 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:23.539 [51/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:23.539 [52/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:23.539 [53/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:23.539 [54/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:23.539 [55/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:23.539 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:23.539 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:23.539 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:23.539 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:23.539 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:23.539 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:23.539 [62/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:23.539 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:23.539 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:23.539 [65/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:23.539 [66/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:23.539 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:23.539 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:23.539 [69/268] Compiling C object 
lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:23.539 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:23.539 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:23.539 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:23.539 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:23.539 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:23.539 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:23.539 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:23.539 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:23.540 [78/268] Linking static target lib/librte_meter.a 00:02:23.540 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:23.805 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:23.805 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:23.805 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:23.805 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:23.805 [84/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:23.805 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:23.805 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:23.805 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:23.805 [88/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:23.805 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:23.805 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:23.805 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:23.805 [92/268] 
Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:23.805 [93/268] Linking static target lib/librte_ring.a 00:02:23.805 [94/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.805 [95/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:23.805 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:23.805 [97/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:23.805 [98/268] Linking static target lib/librte_telemetry.a 00:02:23.805 [99/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.805 [100/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:23.805 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:23.805 [102/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:23.805 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:23.805 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:23.805 [105/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:23.805 [106/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:23.805 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:23.805 [108/268] Linking static target lib/librte_mempool.a 00:02:23.805 [109/268] Linking static target lib/librte_rcu.a 00:02:23.805 [110/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:23.805 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:23.805 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:23.805 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:23.805 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:23.805 [115/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:23.805 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:23.805 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:23.805 [118/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:23.805 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:23.805 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:23.805 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:23.805 [122/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:23.805 [123/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:23.805 [124/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:23.805 [125/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:23.805 [126/268] Linking static target lib/librte_eal.a 00:02:23.805 [127/268] Linking static target lib/librte_net.a 00:02:23.805 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:23.805 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:23.805 [130/268] Linking static target lib/librte_cmdline.a 00:02:23.805 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:23.805 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:23.805 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:23.805 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:23.805 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.064 [136/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.064 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 
00:02:24.064 [138/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:24.064 [139/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:24.065 [140/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:24.065 [141/268] Linking static target lib/librte_mbuf.a 00:02:24.065 [142/268] Linking target lib/librte_log.so.24.1 00:02:24.065 [143/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:24.065 [144/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:24.065 [145/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.065 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:24.065 [147/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:24.065 [148/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:24.065 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:24.065 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:24.065 [151/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:24.065 [152/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.065 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:24.065 [154/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:24.065 [155/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.065 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:24.065 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:24.065 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:24.065 [159/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:24.065 [160/268] Linking static target lib/librte_compressdev.a 00:02:24.065 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:24.065 [162/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:24.065 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:24.065 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:24.065 [165/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:24.065 [166/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:24.065 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:24.065 [168/268] Linking static target lib/librte_reorder.a 00:02:24.065 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:24.065 [170/268] Linking static target lib/librte_timer.a 00:02:24.065 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:24.065 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:24.065 [173/268] Linking static target lib/librte_dmadev.a 00:02:24.065 [174/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:24.065 [175/268] Linking target lib/librte_kvargs.so.24.1 00:02:24.065 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:24.324 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:24.324 [178/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:24.324 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:24.324 [180/268] Linking static target lib/librte_security.a 00:02:24.324 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:24.324 [182/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:24.324 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:24.324 [184/268] Linking static target lib/librte_power.a 00:02:24.324 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:24.324 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:24.324 [187/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:24.324 [188/268] Linking target lib/librte_telemetry.so.24.1 00:02:24.324 [189/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:24.324 [190/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:24.324 [191/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:24.324 [192/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:24.324 [193/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:24.324 [194/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:24.324 [195/268] Linking static target drivers/librte_bus_vdev.a 00:02:24.324 [196/268] Linking static target lib/librte_hash.a 00:02:24.324 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:24.324 [198/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:24.324 [199/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:24.324 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:24.324 [201/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:24.324 [202/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:24.584 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:24.584 [204/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.584 [205/268] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.584 [206/268] Linking static target drivers/librte_bus_pci.a 00:02:24.584 [207/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:24.584 [208/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.584 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.584 [210/268] Linking static target drivers/librte_mempool_ring.a 00:02:24.584 [211/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.584 [212/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.584 [213/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.584 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:24.584 [215/268] Linking static target lib/librte_cryptodev.a 00:02:24.584 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.842 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.842 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.842 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.842 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:24.842 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.842 [222/268] Linking static target lib/librte_ethdev.a 00:02:24.842 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.101 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:25.101 [225/268] 
Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.101 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.359 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.293 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:26.293 [229/268] Linking static target lib/librte_vhost.a 00:02:26.552 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.928 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.231 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.799 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.057 [234/268] Linking target lib/librte_eal.so.24.1 00:02:34.057 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:34.057 [236/268] Linking target lib/librte_ring.so.24.1 00:02:34.057 [237/268] Linking target lib/librte_timer.so.24.1 00:02:34.057 [238/268] Linking target lib/librte_meter.so.24.1 00:02:34.057 [239/268] Linking target lib/librte_pci.so.24.1 00:02:34.057 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:34.057 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:34.317 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:34.317 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:34.317 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:34.317 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:34.317 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:34.317 [247/268] Linking target 
lib/librte_rcu.so.24.1 00:02:34.317 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:34.317 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:34.317 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:34.317 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:34.576 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:34.576 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:34.576 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:34.576 [255/268] Linking target lib/librte_net.so.24.1 00:02:34.576 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:34.576 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:34.576 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:34.836 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:34.836 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:34.836 [261/268] Linking target lib/librte_hash.so.24.1 00:02:34.836 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:34.836 [263/268] Linking target lib/librte_security.so.24.1 00:02:34.836 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:34.836 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:34.836 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:35.095 [267/268] Linking target lib/librte_power.so.24.1 00:02:35.095 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:35.095 INFO: autodetecting backend as ninja 00:02:35.095 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:47.307 CC lib/log/log.o 00:02:47.307 CC lib/ut_mock/mock.o 00:02:47.307 CC lib/log/log_flags.o 00:02:47.307 CC lib/log/log_deprecated.o 
00:02:47.307 CC lib/ut/ut.o 00:02:47.307 LIB libspdk_ut.a 00:02:47.307 LIB libspdk_ut_mock.a 00:02:47.307 LIB libspdk_log.a 00:02:47.307 SO libspdk_ut.so.2.0 00:02:47.307 SO libspdk_ut_mock.so.6.0 00:02:47.307 SO libspdk_log.so.7.1 00:02:47.307 SYMLINK libspdk_ut_mock.so 00:02:47.307 SYMLINK libspdk_ut.so 00:02:47.307 SYMLINK libspdk_log.so 00:02:47.307 CXX lib/trace_parser/trace.o 00:02:47.307 CC lib/dma/dma.o 00:02:47.307 CC lib/ioat/ioat.o 00:02:47.307 CC lib/util/base64.o 00:02:47.307 CC lib/util/bit_array.o 00:02:47.307 CC lib/util/cpuset.o 00:02:47.307 CC lib/util/crc16.o 00:02:47.307 CC lib/util/crc32.o 00:02:47.307 CC lib/util/crc32c.o 00:02:47.307 CC lib/util/crc32_ieee.o 00:02:47.307 CC lib/util/crc64.o 00:02:47.307 CC lib/util/dif.o 00:02:47.307 CC lib/util/fd.o 00:02:47.307 CC lib/util/fd_group.o 00:02:47.307 CC lib/util/file.o 00:02:47.307 CC lib/util/hexlify.o 00:02:47.307 CC lib/util/iov.o 00:02:47.307 CC lib/util/math.o 00:02:47.307 CC lib/util/net.o 00:02:47.307 CC lib/util/pipe.o 00:02:47.307 CC lib/util/strerror_tls.o 00:02:47.307 CC lib/util/string.o 00:02:47.307 CC lib/util/uuid.o 00:02:47.307 CC lib/util/xor.o 00:02:47.307 CC lib/util/zipf.o 00:02:47.307 CC lib/util/md5.o 00:02:47.307 CC lib/vfio_user/host/vfio_user_pci.o 00:02:47.307 CC lib/vfio_user/host/vfio_user.o 00:02:47.307 LIB libspdk_dma.a 00:02:47.307 SO libspdk_dma.so.5.0 00:02:47.307 LIB libspdk_ioat.a 00:02:47.307 SYMLINK libspdk_dma.so 00:02:47.307 SO libspdk_ioat.so.7.0 00:02:47.307 SYMLINK libspdk_ioat.so 00:02:47.307 LIB libspdk_vfio_user.a 00:02:47.308 SO libspdk_vfio_user.so.5.0 00:02:47.308 LIB libspdk_util.a 00:02:47.308 SYMLINK libspdk_vfio_user.so 00:02:47.308 SO libspdk_util.so.10.1 00:02:47.308 SYMLINK libspdk_util.so 00:02:47.308 LIB libspdk_trace_parser.a 00:02:47.308 SO libspdk_trace_parser.so.6.0 00:02:47.308 SYMLINK libspdk_trace_parser.so 00:02:47.308 CC lib/rdma_utils/rdma_utils.o 00:02:47.308 CC lib/json/json_parse.o 00:02:47.308 CC lib/json/json_util.o 
00:02:47.308 CC lib/json/json_write.o 00:02:47.308 CC lib/env_dpdk/env.o 00:02:47.308 CC lib/env_dpdk/memory.o 00:02:47.308 CC lib/vmd/vmd.o 00:02:47.308 CC lib/vmd/led.o 00:02:47.308 CC lib/conf/conf.o 00:02:47.308 CC lib/env_dpdk/pci.o 00:02:47.308 CC lib/env_dpdk/init.o 00:02:47.308 CC lib/env_dpdk/threads.o 00:02:47.308 CC lib/idxd/idxd.o 00:02:47.308 CC lib/env_dpdk/pci_ioat.o 00:02:47.308 CC lib/idxd/idxd_user.o 00:02:47.308 CC lib/env_dpdk/pci_virtio.o 00:02:47.308 CC lib/idxd/idxd_kernel.o 00:02:47.308 CC lib/env_dpdk/pci_vmd.o 00:02:47.308 CC lib/env_dpdk/pci_idxd.o 00:02:47.308 CC lib/env_dpdk/pci_event.o 00:02:47.308 CC lib/env_dpdk/sigbus_handler.o 00:02:47.308 CC lib/env_dpdk/pci_dpdk.o 00:02:47.308 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:47.308 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:47.308 LIB libspdk_conf.a 00:02:47.308 LIB libspdk_rdma_utils.a 00:02:47.308 SO libspdk_conf.so.6.0 00:02:47.308 LIB libspdk_json.a 00:02:47.308 SO libspdk_rdma_utils.so.1.0 00:02:47.308 SO libspdk_json.so.6.0 00:02:47.308 SYMLINK libspdk_conf.so 00:02:47.308 SYMLINK libspdk_rdma_utils.so 00:02:47.308 SYMLINK libspdk_json.so 00:02:47.567 LIB libspdk_idxd.a 00:02:47.567 LIB libspdk_vmd.a 00:02:47.567 SO libspdk_idxd.so.12.1 00:02:47.567 SO libspdk_vmd.so.6.0 00:02:47.567 SYMLINK libspdk_idxd.so 00:02:47.567 SYMLINK libspdk_vmd.so 00:02:47.567 CC lib/rdma_provider/common.o 00:02:47.567 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:47.825 CC lib/jsonrpc/jsonrpc_server.o 00:02:47.825 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:47.825 CC lib/jsonrpc/jsonrpc_client.o 00:02:47.825 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:47.825 LIB libspdk_rdma_provider.a 00:02:47.825 SO libspdk_rdma_provider.so.7.0 00:02:47.825 LIB libspdk_jsonrpc.a 00:02:47.825 SYMLINK libspdk_rdma_provider.so 00:02:48.084 SO libspdk_jsonrpc.so.6.0 00:02:48.084 SYMLINK libspdk_jsonrpc.so 00:02:48.084 LIB libspdk_env_dpdk.a 00:02:48.084 SO libspdk_env_dpdk.so.15.1 00:02:48.343 SYMLINK libspdk_env_dpdk.so 
00:02:48.343 CC lib/rpc/rpc.o 00:02:48.603 LIB libspdk_rpc.a 00:02:48.603 SO libspdk_rpc.so.6.0 00:02:48.603 SYMLINK libspdk_rpc.so 00:02:48.863 CC lib/trace/trace.o 00:02:48.863 CC lib/trace/trace_flags.o 00:02:48.863 CC lib/keyring/keyring.o 00:02:48.863 CC lib/trace/trace_rpc.o 00:02:48.863 CC lib/keyring/keyring_rpc.o 00:02:48.863 CC lib/notify/notify.o 00:02:48.863 CC lib/notify/notify_rpc.o 00:02:49.122 LIB libspdk_notify.a 00:02:49.122 SO libspdk_notify.so.6.0 00:02:49.122 LIB libspdk_keyring.a 00:02:49.122 LIB libspdk_trace.a 00:02:49.122 SO libspdk_keyring.so.2.0 00:02:49.122 SYMLINK libspdk_notify.so 00:02:49.122 SO libspdk_trace.so.11.0 00:02:49.381 SYMLINK libspdk_keyring.so 00:02:49.381 SYMLINK libspdk_trace.so 00:02:49.641 CC lib/thread/thread.o 00:02:49.641 CC lib/thread/iobuf.o 00:02:49.641 CC lib/sock/sock.o 00:02:49.641 CC lib/sock/sock_rpc.o 00:02:49.900 LIB libspdk_sock.a 00:02:49.900 SO libspdk_sock.so.10.0 00:02:49.900 SYMLINK libspdk_sock.so 00:02:50.467 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:50.467 CC lib/nvme/nvme_ctrlr.o 00:02:50.467 CC lib/nvme/nvme_fabric.o 00:02:50.467 CC lib/nvme/nvme_ns_cmd.o 00:02:50.467 CC lib/nvme/nvme_ns.o 00:02:50.468 CC lib/nvme/nvme_pcie_common.o 00:02:50.468 CC lib/nvme/nvme_pcie.o 00:02:50.468 CC lib/nvme/nvme_qpair.o 00:02:50.468 CC lib/nvme/nvme.o 00:02:50.468 CC lib/nvme/nvme_quirks.o 00:02:50.468 CC lib/nvme/nvme_transport.o 00:02:50.468 CC lib/nvme/nvme_discovery.o 00:02:50.468 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:50.468 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:50.468 CC lib/nvme/nvme_tcp.o 00:02:50.468 CC lib/nvme/nvme_opal.o 00:02:50.468 CC lib/nvme/nvme_io_msg.o 00:02:50.468 CC lib/nvme/nvme_poll_group.o 00:02:50.468 CC lib/nvme/nvme_zns.o 00:02:50.468 CC lib/nvme/nvme_stubs.o 00:02:50.468 CC lib/nvme/nvme_auth.o 00:02:50.468 CC lib/nvme/nvme_cuse.o 00:02:50.468 CC lib/nvme/nvme_vfio_user.o 00:02:50.468 CC lib/nvme/nvme_rdma.o 00:02:50.726 LIB libspdk_thread.a 00:02:50.726 SO libspdk_thread.so.11.0 
00:02:50.727 SYMLINK libspdk_thread.so 00:02:50.986 CC lib/vfu_tgt/tgt_endpoint.o 00:02:50.986 CC lib/vfu_tgt/tgt_rpc.o 00:02:50.986 CC lib/init/subsystem_rpc.o 00:02:50.986 CC lib/init/json_config.o 00:02:50.986 CC lib/init/subsystem.o 00:02:50.986 CC lib/init/rpc.o 00:02:50.986 CC lib/virtio/virtio.o 00:02:50.986 CC lib/fsdev/fsdev.o 00:02:50.986 CC lib/fsdev/fsdev_io.o 00:02:50.986 CC lib/virtio/virtio_vfio_user.o 00:02:50.986 CC lib/fsdev/fsdev_rpc.o 00:02:50.986 CC lib/virtio/virtio_vhost_user.o 00:02:50.986 CC lib/virtio/virtio_pci.o 00:02:50.986 CC lib/accel/accel.o 00:02:50.986 CC lib/accel/accel_rpc.o 00:02:50.986 CC lib/accel/accel_sw.o 00:02:50.986 CC lib/blob/blobstore.o 00:02:50.986 CC lib/blob/request.o 00:02:50.986 CC lib/blob/zeroes.o 00:02:50.986 CC lib/blob/blob_bs_dev.o 00:02:51.244 LIB libspdk_init.a 00:02:51.244 LIB libspdk_vfu_tgt.a 00:02:51.244 SO libspdk_init.so.6.0 00:02:51.503 SO libspdk_vfu_tgt.so.3.0 00:02:51.503 LIB libspdk_virtio.a 00:02:51.503 SYMLINK libspdk_init.so 00:02:51.503 SO libspdk_virtio.so.7.0 00:02:51.503 SYMLINK libspdk_vfu_tgt.so 00:02:51.503 SYMLINK libspdk_virtio.so 00:02:51.503 LIB libspdk_fsdev.a 00:02:51.503 SO libspdk_fsdev.so.2.0 00:02:51.761 SYMLINK libspdk_fsdev.so 00:02:51.761 CC lib/event/app.o 00:02:51.761 CC lib/event/reactor.o 00:02:51.761 CC lib/event/log_rpc.o 00:02:51.761 CC lib/event/app_rpc.o 00:02:51.761 CC lib/event/scheduler_static.o 00:02:52.020 LIB libspdk_accel.a 00:02:52.020 SO libspdk_accel.so.16.0 00:02:52.020 LIB libspdk_nvme.a 00:02:52.020 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:52.020 SYMLINK libspdk_accel.so 00:02:52.020 LIB libspdk_event.a 00:02:52.020 SO libspdk_nvme.so.15.0 00:02:52.020 SO libspdk_event.so.14.0 00:02:52.278 SYMLINK libspdk_event.so 00:02:52.278 SYMLINK libspdk_nvme.so 00:02:52.278 CC lib/bdev/bdev.o 00:02:52.278 CC lib/bdev/bdev_rpc.o 00:02:52.278 CC lib/bdev/bdev_zone.o 00:02:52.278 CC lib/bdev/part.o 00:02:52.278 CC lib/bdev/scsi_nvme.o 00:02:52.536 LIB 
libspdk_fuse_dispatcher.a 00:02:52.536 SO libspdk_fuse_dispatcher.so.1.0 00:02:52.536 SYMLINK libspdk_fuse_dispatcher.so 00:02:53.472 LIB libspdk_blob.a 00:02:53.472 SO libspdk_blob.so.11.0 00:02:53.472 SYMLINK libspdk_blob.so 00:02:53.731 CC lib/lvol/lvol.o 00:02:53.731 CC lib/blobfs/blobfs.o 00:02:53.731 CC lib/blobfs/tree.o 00:02:54.299 LIB libspdk_bdev.a 00:02:54.299 SO libspdk_bdev.so.17.0 00:02:54.299 LIB libspdk_blobfs.a 00:02:54.299 SYMLINK libspdk_bdev.so 00:02:54.299 SO libspdk_blobfs.so.10.0 00:02:54.299 LIB libspdk_lvol.a 00:02:54.299 SYMLINK libspdk_blobfs.so 00:02:54.299 SO libspdk_lvol.so.10.0 00:02:54.557 SYMLINK libspdk_lvol.so 00:02:54.557 CC lib/nvmf/ctrlr.o 00:02:54.557 CC lib/nvmf/ctrlr_discovery.o 00:02:54.557 CC lib/nvmf/ctrlr_bdev.o 00:02:54.557 CC lib/scsi/dev.o 00:02:54.557 CC lib/scsi/lun.o 00:02:54.557 CC lib/nvmf/subsystem.o 00:02:54.557 CC lib/scsi/port.o 00:02:54.557 CC lib/nvmf/nvmf.o 00:02:54.557 CC lib/nbd/nbd.o 00:02:54.557 CC lib/scsi/scsi_bdev.o 00:02:54.557 CC lib/scsi/scsi.o 00:02:54.557 CC lib/ublk/ublk.o 00:02:54.557 CC lib/nvmf/nvmf_rpc.o 00:02:54.557 CC lib/ftl/ftl_core.o 00:02:54.557 CC lib/nbd/nbd_rpc.o 00:02:54.557 CC lib/ftl/ftl_init.o 00:02:54.557 CC lib/ublk/ublk_rpc.o 00:02:54.557 CC lib/nvmf/transport.o 00:02:54.557 CC lib/scsi/scsi_pr.o 00:02:54.557 CC lib/nvmf/tcp.o 00:02:54.557 CC lib/scsi/scsi_rpc.o 00:02:54.557 CC lib/ftl/ftl_layout.o 00:02:54.557 CC lib/ftl/ftl_debug.o 00:02:54.557 CC lib/scsi/task.o 00:02:54.557 CC lib/nvmf/stubs.o 00:02:54.557 CC lib/ftl/ftl_io.o 00:02:54.557 CC lib/nvmf/mdns_server.o 00:02:54.557 CC lib/ftl/ftl_sb.o 00:02:54.557 CC lib/ftl/ftl_l2p.o 00:02:54.557 CC lib/nvmf/vfio_user.o 00:02:54.557 CC lib/nvmf/rdma.o 00:02:54.557 CC lib/ftl/ftl_l2p_flat.o 00:02:54.557 CC lib/ftl/ftl_band.o 00:02:54.557 CC lib/ftl/ftl_nv_cache.o 00:02:54.557 CC lib/nvmf/auth.o 00:02:54.557 CC lib/ftl/ftl_band_ops.o 00:02:54.557 CC lib/ftl/ftl_writer.o 00:02:54.557 CC lib/ftl/ftl_rq.o 00:02:54.557 CC 
lib/ftl/ftl_reloc.o 00:02:54.557 CC lib/ftl/ftl_l2p_cache.o 00:02:54.557 CC lib/ftl/ftl_p2l.o 00:02:54.557 CC lib/ftl/ftl_p2l_log.o 00:02:54.557 CC lib/ftl/mngt/ftl_mngt.o 00:02:54.557 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:54.557 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:54.557 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:54.557 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:54.557 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:54.557 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:54.557 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:54.557 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:54.557 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:54.557 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:54.557 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:54.557 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:54.557 CC lib/ftl/utils/ftl_conf.o 00:02:54.557 CC lib/ftl/utils/ftl_md.o 00:02:54.557 CC lib/ftl/utils/ftl_mempool.o 00:02:54.557 CC lib/ftl/utils/ftl_property.o 00:02:54.557 CC lib/ftl/utils/ftl_bitmap.o 00:02:54.557 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:54.557 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:54.557 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:54.557 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:54.557 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:54.557 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:54.557 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:54.557 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:54.557 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:54.557 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:54.557 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:54.557 CC lib/ftl/base/ftl_base_dev.o 00:02:54.557 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:54.557 CC lib/ftl/ftl_trace.o 00:02:54.557 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:54.557 CC lib/ftl/base/ftl_base_bdev.o 00:02:55.124 LIB libspdk_nbd.a 00:02:55.124 SO libspdk_nbd.so.7.0 00:02:55.124 SYMLINK libspdk_nbd.so 00:02:55.382 LIB libspdk_ublk.a 00:02:55.382 LIB libspdk_scsi.a 00:02:55.382 SO libspdk_ublk.so.3.0 00:02:55.382 SO libspdk_scsi.so.9.0 00:02:55.382 SYMLINK libspdk_ublk.so 00:02:55.382 SYMLINK 
libspdk_scsi.so 00:02:55.642 LIB libspdk_ftl.a 00:02:55.642 CC lib/iscsi/conn.o 00:02:55.642 CC lib/iscsi/init_grp.o 00:02:55.642 CC lib/iscsi/iscsi.o 00:02:55.642 CC lib/iscsi/param.o 00:02:55.642 CC lib/iscsi/portal_grp.o 00:02:55.642 CC lib/iscsi/tgt_node.o 00:02:55.642 CC lib/iscsi/iscsi_subsystem.o 00:02:55.642 CC lib/iscsi/iscsi_rpc.o 00:02:55.642 CC lib/iscsi/task.o 00:02:55.642 CC lib/vhost/vhost.o 00:02:55.642 CC lib/vhost/vhost_rpc.o 00:02:55.642 CC lib/vhost/vhost_scsi.o 00:02:55.642 CC lib/vhost/vhost_blk.o 00:02:55.642 CC lib/vhost/rte_vhost_user.o 00:02:55.900 SO libspdk_ftl.so.9.0 00:02:55.900 SYMLINK libspdk_ftl.so 00:02:56.162 LIB libspdk_nvmf.a 00:02:56.421 SO libspdk_nvmf.so.20.0 00:02:56.421 SYMLINK libspdk_nvmf.so 00:02:56.421 LIB libspdk_vhost.a 00:02:56.680 SO libspdk_vhost.so.8.0 00:02:56.680 SYMLINK libspdk_vhost.so 00:02:56.680 LIB libspdk_iscsi.a 00:02:56.680 SO libspdk_iscsi.so.8.0 00:02:56.939 SYMLINK libspdk_iscsi.so 00:02:57.507 CC module/env_dpdk/env_dpdk_rpc.o 00:02:57.507 CC module/vfu_device/vfu_virtio.o 00:02:57.507 CC module/vfu_device/vfu_virtio_blk.o 00:02:57.507 CC module/vfu_device/vfu_virtio_scsi.o 00:02:57.507 CC module/vfu_device/vfu_virtio_rpc.o 00:02:57.507 CC module/vfu_device/vfu_virtio_fs.o 00:02:57.507 CC module/accel/error/accel_error_rpc.o 00:02:57.507 CC module/sock/posix/posix.o 00:02:57.507 CC module/accel/error/accel_error.o 00:02:57.507 CC module/blob/bdev/blob_bdev.o 00:02:57.507 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:57.507 CC module/accel/iaa/accel_iaa.o 00:02:57.507 CC module/accel/iaa/accel_iaa_rpc.o 00:02:57.507 LIB libspdk_env_dpdk_rpc.a 00:02:57.507 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:57.507 CC module/keyring/linux/keyring.o 00:02:57.507 CC module/accel/dsa/accel_dsa.o 00:02:57.507 CC module/accel/ioat/accel_ioat.o 00:02:57.507 CC module/accel/ioat/accel_ioat_rpc.o 00:02:57.507 CC module/keyring/linux/keyring_rpc.o 00:02:57.507 CC module/accel/dsa/accel_dsa_rpc.o 
00:02:57.507 CC module/scheduler/gscheduler/gscheduler.o 00:02:57.507 CC module/keyring/file/keyring.o 00:02:57.507 CC module/keyring/file/keyring_rpc.o 00:02:57.507 CC module/fsdev/aio/fsdev_aio.o 00:02:57.507 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:57.507 CC module/fsdev/aio/linux_aio_mgr.o 00:02:57.507 SO libspdk_env_dpdk_rpc.so.6.0 00:02:57.507 SYMLINK libspdk_env_dpdk_rpc.so 00:02:57.766 LIB libspdk_scheduler_gscheduler.a 00:02:57.766 LIB libspdk_keyring_linux.a 00:02:57.766 LIB libspdk_scheduler_dpdk_governor.a 00:02:57.766 LIB libspdk_keyring_file.a 00:02:57.766 SO libspdk_scheduler_gscheduler.so.4.0 00:02:57.766 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:57.766 SO libspdk_keyring_linux.so.1.0 00:02:57.766 LIB libspdk_accel_ioat.a 00:02:57.766 LIB libspdk_accel_iaa.a 00:02:57.766 LIB libspdk_accel_error.a 00:02:57.766 LIB libspdk_scheduler_dynamic.a 00:02:57.766 SO libspdk_keyring_file.so.2.0 00:02:57.766 SO libspdk_accel_ioat.so.6.0 00:02:57.766 LIB libspdk_blob_bdev.a 00:02:57.766 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:57.766 SO libspdk_accel_error.so.2.0 00:02:57.766 SYMLINK libspdk_scheduler_gscheduler.so 00:02:57.766 SO libspdk_accel_iaa.so.3.0 00:02:57.766 SO libspdk_scheduler_dynamic.so.4.0 00:02:57.766 SYMLINK libspdk_keyring_linux.so 00:02:57.766 SO libspdk_blob_bdev.so.11.0 00:02:57.766 LIB libspdk_accel_dsa.a 00:02:57.766 SYMLINK libspdk_keyring_file.so 00:02:57.766 SYMLINK libspdk_accel_error.so 00:02:57.766 SYMLINK libspdk_accel_ioat.so 00:02:57.766 SYMLINK libspdk_scheduler_dynamic.so 00:02:57.766 SYMLINK libspdk_accel_iaa.so 00:02:57.766 SO libspdk_accel_dsa.so.5.0 00:02:57.766 SYMLINK libspdk_blob_bdev.so 00:02:57.766 LIB libspdk_vfu_device.a 00:02:58.026 SYMLINK libspdk_accel_dsa.so 00:02:58.026 SO libspdk_vfu_device.so.3.0 00:02:58.026 SYMLINK libspdk_vfu_device.so 00:02:58.026 LIB libspdk_fsdev_aio.a 00:02:58.026 LIB libspdk_sock_posix.a 00:02:58.026 SO libspdk_fsdev_aio.so.1.0 00:02:58.285 SO libspdk_sock_posix.so.6.0 
00:02:58.285 SYMLINK libspdk_fsdev_aio.so 00:02:58.285 SYMLINK libspdk_sock_posix.so 00:02:58.285 CC module/bdev/gpt/vbdev_gpt.o 00:02:58.285 CC module/bdev/gpt/gpt.o 00:02:58.285 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:58.285 CC module/bdev/delay/vbdev_delay.o 00:02:58.285 CC module/bdev/lvol/vbdev_lvol.o 00:02:58.285 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:58.285 CC module/blobfs/bdev/blobfs_bdev.o 00:02:58.285 CC module/bdev/error/vbdev_error.o 00:02:58.285 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:58.285 CC module/bdev/error/vbdev_error_rpc.o 00:02:58.285 CC module/bdev/null/bdev_null_rpc.o 00:02:58.285 CC module/bdev/null/bdev_null.o 00:02:58.285 CC module/bdev/iscsi/bdev_iscsi.o 00:02:58.285 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:58.285 CC module/bdev/passthru/vbdev_passthru.o 00:02:58.285 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:58.285 CC module/bdev/aio/bdev_aio_rpc.o 00:02:58.285 CC module/bdev/aio/bdev_aio.o 00:02:58.285 CC module/bdev/raid/bdev_raid.o 00:02:58.285 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:58.285 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:58.285 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:58.285 CC module/bdev/malloc/bdev_malloc.o 00:02:58.285 CC module/bdev/raid/bdev_raid_rpc.o 00:02:58.285 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:58.285 CC module/bdev/split/vbdev_split.o 00:02:58.285 CC module/bdev/raid/bdev_raid_sb.o 00:02:58.285 CC module/bdev/raid/raid0.o 00:02:58.285 CC module/bdev/split/vbdev_split_rpc.o 00:02:58.285 CC module/bdev/ftl/bdev_ftl.o 00:02:58.285 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:58.285 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:58.285 CC module/bdev/raid/concat.o 00:02:58.285 CC module/bdev/raid/raid1.o 00:02:58.285 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:58.285 CC module/bdev/nvme/bdev_nvme.o 00:02:58.285 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:58.285 CC module/bdev/nvme/nvme_rpc.o 00:02:58.285 CC module/bdev/nvme/vbdev_opal.o 00:02:58.285 CC 
module/bdev/nvme/bdev_mdns_client.o 00:02:58.285 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:58.285 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:58.543 LIB libspdk_blobfs_bdev.a 00:02:58.543 SO libspdk_blobfs_bdev.so.6.0 00:02:58.543 LIB libspdk_bdev_split.a 00:02:58.543 LIB libspdk_bdev_null.a 00:02:58.543 LIB libspdk_bdev_error.a 00:02:58.543 SYMLINK libspdk_blobfs_bdev.so 00:02:58.543 SO libspdk_bdev_split.so.6.0 00:02:58.543 LIB libspdk_bdev_gpt.a 00:02:58.543 SO libspdk_bdev_error.so.6.0 00:02:58.543 SO libspdk_bdev_null.so.6.0 00:02:58.543 LIB libspdk_bdev_passthru.a 00:02:58.802 SO libspdk_bdev_gpt.so.6.0 00:02:58.802 LIB libspdk_bdev_ftl.a 00:02:58.802 LIB libspdk_bdev_delay.a 00:02:58.802 SYMLINK libspdk_bdev_split.so 00:02:58.802 LIB libspdk_bdev_iscsi.a 00:02:58.802 SO libspdk_bdev_passthru.so.6.0 00:02:58.802 SYMLINK libspdk_bdev_error.so 00:02:58.802 SYMLINK libspdk_bdev_null.so 00:02:58.802 SO libspdk_bdev_delay.so.6.0 00:02:58.802 LIB libspdk_bdev_zone_block.a 00:02:58.802 SO libspdk_bdev_ftl.so.6.0 00:02:58.802 SYMLINK libspdk_bdev_gpt.so 00:02:58.802 SO libspdk_bdev_iscsi.so.6.0 00:02:58.802 LIB libspdk_bdev_aio.a 00:02:58.802 LIB libspdk_bdev_malloc.a 00:02:58.802 SO libspdk_bdev_zone_block.so.6.0 00:02:58.802 SYMLINK libspdk_bdev_passthru.so 00:02:58.802 SO libspdk_bdev_aio.so.6.0 00:02:58.802 SO libspdk_bdev_malloc.so.6.0 00:02:58.802 SYMLINK libspdk_bdev_ftl.so 00:02:58.802 SYMLINK libspdk_bdev_delay.so 00:02:58.802 SYMLINK libspdk_bdev_iscsi.so 00:02:58.802 LIB libspdk_bdev_virtio.a 00:02:58.802 SYMLINK libspdk_bdev_malloc.so 00:02:58.802 SYMLINK libspdk_bdev_zone_block.so 00:02:58.802 SYMLINK libspdk_bdev_aio.so 00:02:58.802 LIB libspdk_bdev_lvol.a 00:02:58.802 SO libspdk_bdev_virtio.so.6.0 00:02:58.802 SO libspdk_bdev_lvol.so.6.0 00:02:58.802 SYMLINK libspdk_bdev_virtio.so 00:02:58.802 SYMLINK libspdk_bdev_lvol.so 00:02:59.060 LIB libspdk_bdev_raid.a 00:02:59.319 SO libspdk_bdev_raid.so.6.0 00:02:59.319 SYMLINK libspdk_bdev_raid.so 
00:03:00.257 LIB libspdk_bdev_nvme.a 00:03:00.257 SO libspdk_bdev_nvme.so.7.1 00:03:00.257 SYMLINK libspdk_bdev_nvme.so 00:03:01.195 CC module/event/subsystems/scheduler/scheduler.o 00:03:01.195 CC module/event/subsystems/keyring/keyring.o 00:03:01.195 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:01.195 CC module/event/subsystems/iobuf/iobuf.o 00:03:01.195 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:01.195 CC module/event/subsystems/vmd/vmd.o 00:03:01.195 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:01.195 CC module/event/subsystems/sock/sock.o 00:03:01.195 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:01.195 CC module/event/subsystems/fsdev/fsdev.o 00:03:01.195 LIB libspdk_event_scheduler.a 00:03:01.195 LIB libspdk_event_fsdev.a 00:03:01.195 LIB libspdk_event_keyring.a 00:03:01.195 LIB libspdk_event_vfu_tgt.a 00:03:01.195 LIB libspdk_event_vmd.a 00:03:01.195 LIB libspdk_event_sock.a 00:03:01.195 LIB libspdk_event_iobuf.a 00:03:01.195 LIB libspdk_event_vhost_blk.a 00:03:01.195 SO libspdk_event_fsdev.so.1.0 00:03:01.195 SO libspdk_event_keyring.so.1.0 00:03:01.195 SO libspdk_event_scheduler.so.4.0 00:03:01.195 SO libspdk_event_iobuf.so.3.0 00:03:01.195 SO libspdk_event_vfu_tgt.so.3.0 00:03:01.195 SO libspdk_event_sock.so.5.0 00:03:01.195 SO libspdk_event_vmd.so.6.0 00:03:01.195 SO libspdk_event_vhost_blk.so.3.0 00:03:01.195 SYMLINK libspdk_event_fsdev.so 00:03:01.195 SYMLINK libspdk_event_keyring.so 00:03:01.195 SYMLINK libspdk_event_scheduler.so 00:03:01.195 SYMLINK libspdk_event_vfu_tgt.so 00:03:01.195 SYMLINK libspdk_event_iobuf.so 00:03:01.195 SYMLINK libspdk_event_vmd.so 00:03:01.195 SYMLINK libspdk_event_sock.so 00:03:01.195 SYMLINK libspdk_event_vhost_blk.so 00:03:01.454 CC module/event/subsystems/accel/accel.o 00:03:01.714 LIB libspdk_event_accel.a 00:03:01.714 SO libspdk_event_accel.so.6.0 00:03:01.714 SYMLINK libspdk_event_accel.so 00:03:01.973 CC module/event/subsystems/bdev/bdev.o 00:03:02.234 LIB libspdk_event_bdev.a 00:03:02.234 
SO libspdk_event_bdev.so.6.0 00:03:02.234 SYMLINK libspdk_event_bdev.so 00:03:02.800 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:02.800 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:02.800 CC module/event/subsystems/scsi/scsi.o 00:03:02.800 CC module/event/subsystems/ublk/ublk.o 00:03:02.800 CC module/event/subsystems/nbd/nbd.o 00:03:02.800 LIB libspdk_event_ublk.a 00:03:02.800 LIB libspdk_event_nbd.a 00:03:02.800 LIB libspdk_event_scsi.a 00:03:02.800 SO libspdk_event_ublk.so.3.0 00:03:02.800 SO libspdk_event_nbd.so.6.0 00:03:02.800 SO libspdk_event_scsi.so.6.0 00:03:02.800 LIB libspdk_event_nvmf.a 00:03:02.800 SYMLINK libspdk_event_ublk.so 00:03:02.800 SO libspdk_event_nvmf.so.6.0 00:03:02.800 SYMLINK libspdk_event_nbd.so 00:03:02.800 SYMLINK libspdk_event_scsi.so 00:03:03.129 SYMLINK libspdk_event_nvmf.so 00:03:03.129 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:03.129 CC module/event/subsystems/iscsi/iscsi.o 00:03:03.429 LIB libspdk_event_vhost_scsi.a 00:03:03.429 LIB libspdk_event_iscsi.a 00:03:03.429 SO libspdk_event_vhost_scsi.so.3.0 00:03:03.429 SO libspdk_event_iscsi.so.6.0 00:03:03.429 SYMLINK libspdk_event_vhost_scsi.so 00:03:03.429 SYMLINK libspdk_event_iscsi.so 00:03:03.687 SO libspdk.so.6.0 00:03:03.687 SYMLINK libspdk.so 00:03:03.947 CC app/spdk_lspci/spdk_lspci.o 00:03:03.947 CXX app/trace/trace.o 00:03:03.947 CC test/rpc_client/rpc_client_test.o 00:03:03.947 CC app/trace_record/trace_record.o 00:03:03.947 CC app/spdk_top/spdk_top.o 00:03:03.947 CC app/spdk_nvme_identify/identify.o 00:03:03.947 TEST_HEADER include/spdk/accel.h 00:03:03.947 TEST_HEADER include/spdk/accel_module.h 00:03:03.947 TEST_HEADER include/spdk/assert.h 00:03:03.947 TEST_HEADER include/spdk/barrier.h 00:03:03.947 TEST_HEADER include/spdk/bdev.h 00:03:03.947 TEST_HEADER include/spdk/base64.h 00:03:03.947 CC app/spdk_nvme_perf/perf.o 00:03:03.947 TEST_HEADER include/spdk/bdev_zone.h 00:03:03.947 TEST_HEADER include/spdk/bdev_module.h 00:03:03.947 TEST_HEADER 
include/spdk/bit_array.h 00:03:03.947 TEST_HEADER include/spdk/bit_pool.h 00:03:03.947 CC app/spdk_nvme_discover/discovery_aer.o 00:03:03.947 TEST_HEADER include/spdk/blob_bdev.h 00:03:03.947 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:03.947 TEST_HEADER include/spdk/blob.h 00:03:03.947 TEST_HEADER include/spdk/conf.h 00:03:03.947 TEST_HEADER include/spdk/blobfs.h 00:03:03.947 TEST_HEADER include/spdk/crc16.h 00:03:03.947 TEST_HEADER include/spdk/cpuset.h 00:03:03.947 TEST_HEADER include/spdk/config.h 00:03:03.947 TEST_HEADER include/spdk/crc32.h 00:03:03.947 TEST_HEADER include/spdk/dif.h 00:03:03.947 TEST_HEADER include/spdk/crc64.h 00:03:03.947 TEST_HEADER include/spdk/dma.h 00:03:03.947 TEST_HEADER include/spdk/endian.h 00:03:03.947 TEST_HEADER include/spdk/env_dpdk.h 00:03:03.947 TEST_HEADER include/spdk/env.h 00:03:03.947 TEST_HEADER include/spdk/event.h 00:03:03.947 TEST_HEADER include/spdk/fd_group.h 00:03:03.947 TEST_HEADER include/spdk/fd.h 00:03:03.947 TEST_HEADER include/spdk/file.h 00:03:03.947 TEST_HEADER include/spdk/fsdev.h 00:03:03.947 TEST_HEADER include/spdk/fsdev_module.h 00:03:03.947 TEST_HEADER include/spdk/ftl.h 00:03:03.947 TEST_HEADER include/spdk/hexlify.h 00:03:03.947 TEST_HEADER include/spdk/gpt_spec.h 00:03:03.947 TEST_HEADER include/spdk/histogram_data.h 00:03:03.947 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:03.947 TEST_HEADER include/spdk/idxd.h 00:03:03.947 TEST_HEADER include/spdk/idxd_spec.h 00:03:03.947 TEST_HEADER include/spdk/init.h 00:03:03.947 TEST_HEADER include/spdk/ioat.h 00:03:03.947 TEST_HEADER include/spdk/ioat_spec.h 00:03:03.947 TEST_HEADER include/spdk/iscsi_spec.h 00:03:03.947 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:03.947 TEST_HEADER include/spdk/jsonrpc.h 00:03:03.947 TEST_HEADER include/spdk/keyring.h 00:03:03.947 TEST_HEADER include/spdk/json.h 00:03:03.947 TEST_HEADER include/spdk/likely.h 00:03:03.947 TEST_HEADER include/spdk/keyring_module.h 00:03:03.947 TEST_HEADER include/spdk/lvol.h 
00:03:03.947 TEST_HEADER include/spdk/memory.h 00:03:03.947 TEST_HEADER include/spdk/log.h 00:03:03.947 TEST_HEADER include/spdk/mmio.h 00:03:03.947 TEST_HEADER include/spdk/nbd.h 00:03:03.947 TEST_HEADER include/spdk/md5.h 00:03:03.947 CC app/iscsi_tgt/iscsi_tgt.o 00:03:03.947 TEST_HEADER include/spdk/net.h 00:03:03.947 CC app/spdk_dd/spdk_dd.o 00:03:03.947 TEST_HEADER include/spdk/nvme.h 00:03:03.947 TEST_HEADER include/spdk/notify.h 00:03:03.947 TEST_HEADER include/spdk/nvme_intel.h 00:03:03.947 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:03.947 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:03.947 TEST_HEADER include/spdk/nvme_spec.h 00:03:03.947 CC app/nvmf_tgt/nvmf_main.o 00:03:03.947 TEST_HEADER include/spdk/nvme_zns.h 00:03:03.947 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:03.947 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:03.947 TEST_HEADER include/spdk/nvmf.h 00:03:03.947 TEST_HEADER include/spdk/nvmf_spec.h 00:03:03.947 TEST_HEADER include/spdk/opal_spec.h 00:03:03.947 TEST_HEADER include/spdk/opal.h 00:03:03.947 TEST_HEADER include/spdk/nvmf_transport.h 00:03:03.947 TEST_HEADER include/spdk/pci_ids.h 00:03:03.947 TEST_HEADER include/spdk/queue.h 00:03:03.947 TEST_HEADER include/spdk/reduce.h 00:03:03.947 TEST_HEADER include/spdk/pipe.h 00:03:03.947 TEST_HEADER include/spdk/scheduler.h 00:03:03.947 TEST_HEADER include/spdk/rpc.h 00:03:03.947 TEST_HEADER include/spdk/scsi_spec.h 00:03:03.947 TEST_HEADER include/spdk/scsi.h 00:03:03.947 TEST_HEADER include/spdk/stdinc.h 00:03:03.947 TEST_HEADER include/spdk/sock.h 00:03:03.947 TEST_HEADER include/spdk/string.h 00:03:03.947 TEST_HEADER include/spdk/thread.h 00:03:03.947 TEST_HEADER include/spdk/trace.h 00:03:03.947 TEST_HEADER include/spdk/tree.h 00:03:03.947 TEST_HEADER include/spdk/ublk.h 00:03:03.947 TEST_HEADER include/spdk/util.h 00:03:03.947 TEST_HEADER include/spdk/uuid.h 00:03:04.216 TEST_HEADER include/spdk/trace_parser.h 00:03:04.216 TEST_HEADER include/spdk/version.h 00:03:04.216 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:03:04.216 TEST_HEADER include/spdk/vmd.h 00:03:04.216 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:04.216 TEST_HEADER include/spdk/xor.h 00:03:04.216 TEST_HEADER include/spdk/vhost.h 00:03:04.216 CXX test/cpp_headers/accel.o 00:03:04.216 TEST_HEADER include/spdk/zipf.h 00:03:04.216 CXX test/cpp_headers/accel_module.o 00:03:04.216 CXX test/cpp_headers/assert.o 00:03:04.216 CXX test/cpp_headers/barrier.o 00:03:04.216 CXX test/cpp_headers/base64.o 00:03:04.216 CXX test/cpp_headers/bdev.o 00:03:04.216 CXX test/cpp_headers/bdev_zone.o 00:03:04.216 CXX test/cpp_headers/bdev_module.o 00:03:04.216 CXX test/cpp_headers/bit_array.o 00:03:04.216 CXX test/cpp_headers/blob_bdev.o 00:03:04.216 CXX test/cpp_headers/bit_pool.o 00:03:04.216 CC app/spdk_tgt/spdk_tgt.o 00:03:04.216 CXX test/cpp_headers/blob.o 00:03:04.216 CXX test/cpp_headers/blobfs_bdev.o 00:03:04.216 CXX test/cpp_headers/blobfs.o 00:03:04.216 CXX test/cpp_headers/conf.o 00:03:04.216 CXX test/cpp_headers/config.o 00:03:04.216 CXX test/cpp_headers/cpuset.o 00:03:04.216 CXX test/cpp_headers/crc16.o 00:03:04.216 CXX test/cpp_headers/crc32.o 00:03:04.216 CXX test/cpp_headers/crc64.o 00:03:04.216 CXX test/cpp_headers/endian.o 00:03:04.216 CXX test/cpp_headers/env_dpdk.o 00:03:04.216 CXX test/cpp_headers/dif.o 00:03:04.216 CXX test/cpp_headers/dma.o 00:03:04.216 CXX test/cpp_headers/fd_group.o 00:03:04.216 CXX test/cpp_headers/env.o 00:03:04.216 CXX test/cpp_headers/event.o 00:03:04.216 CXX test/cpp_headers/fd.o 00:03:04.216 CXX test/cpp_headers/fsdev.o 00:03:04.216 CXX test/cpp_headers/fsdev_module.o 00:03:04.216 CXX test/cpp_headers/ftl.o 00:03:04.216 CXX test/cpp_headers/hexlify.o 00:03:04.216 CXX test/cpp_headers/gpt_spec.o 00:03:04.216 CXX test/cpp_headers/fuse_dispatcher.o 00:03:04.216 CXX test/cpp_headers/file.o 00:03:04.216 CXX test/cpp_headers/histogram_data.o 00:03:04.216 CXX test/cpp_headers/idxd.o 00:03:04.216 CXX test/cpp_headers/idxd_spec.o 00:03:04.216 CXX 
test/cpp_headers/init.o 00:03:04.216 CXX test/cpp_headers/ioat.o 00:03:04.216 CXX test/cpp_headers/ioat_spec.o 00:03:04.216 CXX test/cpp_headers/iscsi_spec.o 00:03:04.216 CXX test/cpp_headers/json.o 00:03:04.216 CXX test/cpp_headers/jsonrpc.o 00:03:04.216 CXX test/cpp_headers/keyring.o 00:03:04.216 CXX test/cpp_headers/log.o 00:03:04.216 CXX test/cpp_headers/likely.o 00:03:04.216 CXX test/cpp_headers/keyring_module.o 00:03:04.216 CXX test/cpp_headers/lvol.o 00:03:04.216 CXX test/cpp_headers/md5.o 00:03:04.216 CXX test/cpp_headers/nbd.o 00:03:04.216 CXX test/cpp_headers/memory.o 00:03:04.216 CXX test/cpp_headers/mmio.o 00:03:04.216 CXX test/cpp_headers/net.o 00:03:04.216 CXX test/cpp_headers/notify.o 00:03:04.216 CXX test/cpp_headers/nvme.o 00:03:04.217 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:04.217 CXX test/cpp_headers/nvme_intel.o 00:03:04.217 CXX test/cpp_headers/nvme_ocssd.o 00:03:04.217 CXX test/cpp_headers/nvme_spec.o 00:03:04.217 CXX test/cpp_headers/nvme_zns.o 00:03:04.217 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:04.217 CXX test/cpp_headers/nvmf.o 00:03:04.217 CXX test/cpp_headers/nvmf_cmd.o 00:03:04.217 CXX test/cpp_headers/nvmf_spec.o 00:03:04.217 CXX test/cpp_headers/nvmf_transport.o 00:03:04.217 CXX test/cpp_headers/opal.o 00:03:04.217 CC test/env/vtophys/vtophys.o 00:03:04.217 CC test/thread/poller_perf/poller_perf.o 00:03:04.217 CC examples/ioat/verify/verify.o 00:03:04.217 CC test/env/pci/pci_ut.o 00:03:04.217 CC test/app/histogram_perf/histogram_perf.o 00:03:04.217 CC test/env/memory/memory_ut.o 00:03:04.217 CC test/app/stub/stub.o 00:03:04.217 CC examples/ioat/perf/perf.o 00:03:04.217 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:04.217 CC test/app/jsoncat/jsoncat.o 00:03:04.217 CC app/fio/nvme/fio_plugin.o 00:03:04.217 CC examples/util/zipf/zipf.o 00:03:04.217 CC app/fio/bdev/fio_plugin.o 00:03:04.217 CC test/app/bdev_svc/bdev_svc.o 00:03:04.217 LINK spdk_lspci 00:03:04.217 CC test/dma/test_dma/test_dma.o 00:03:04.477 LINK 
interrupt_tgt 00:03:04.477 LINK rpc_client_test 00:03:04.477 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:04.737 CC test/env/mem_callbacks/mem_callbacks.o 00:03:04.737 LINK iscsi_tgt 00:03:04.737 LINK vtophys 00:03:04.737 LINK histogram_perf 00:03:04.737 LINK poller_perf 00:03:04.737 LINK spdk_nvme_discover 00:03:04.737 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:04.737 LINK nvmf_tgt 00:03:04.737 CXX test/cpp_headers/opal_spec.o 00:03:04.737 CXX test/cpp_headers/pci_ids.o 00:03:04.737 CXX test/cpp_headers/pipe.o 00:03:04.737 CXX test/cpp_headers/queue.o 00:03:04.737 CXX test/cpp_headers/reduce.o 00:03:04.737 CXX test/cpp_headers/rpc.o 00:03:04.737 CXX test/cpp_headers/scheduler.o 00:03:04.737 CXX test/cpp_headers/scsi.o 00:03:04.737 CXX test/cpp_headers/scsi_spec.o 00:03:04.737 CXX test/cpp_headers/sock.o 00:03:04.737 CXX test/cpp_headers/stdinc.o 00:03:04.737 CXX test/cpp_headers/string.o 00:03:04.737 CXX test/cpp_headers/thread.o 00:03:04.737 CXX test/cpp_headers/trace.o 00:03:04.737 CXX test/cpp_headers/trace_parser.o 00:03:04.737 CXX test/cpp_headers/tree.o 00:03:04.737 CXX test/cpp_headers/ublk.o 00:03:04.737 CXX test/cpp_headers/util.o 00:03:04.737 CXX test/cpp_headers/uuid.o 00:03:04.737 CXX test/cpp_headers/version.o 00:03:04.737 CXX test/cpp_headers/vfio_user_pci.o 00:03:04.737 CXX test/cpp_headers/vfio_user_spec.o 00:03:04.737 CXX test/cpp_headers/vhost.o 00:03:04.737 CXX test/cpp_headers/vmd.o 00:03:04.737 CXX test/cpp_headers/xor.o 00:03:04.737 LINK spdk_trace_record 00:03:04.737 CXX test/cpp_headers/zipf.o 00:03:04.737 LINK verify 00:03:04.737 LINK jsoncat 00:03:04.737 LINK zipf 00:03:04.737 LINK env_dpdk_post_init 00:03:04.737 LINK spdk_dd 00:03:04.737 LINK stub 00:03:04.994 LINK spdk_tgt 00:03:04.994 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:04.994 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:04.994 LINK bdev_svc 00:03:04.994 LINK spdk_trace 00:03:04.994 LINK ioat_perf 00:03:04.994 LINK pci_ut 00:03:05.253 LINK spdk_nvme_identify 
00:03:05.253 LINK nvme_fuzz 00:03:05.253 CC test/event/event_perf/event_perf.o 00:03:05.253 CC test/event/reactor/reactor.o 00:03:05.253 LINK spdk_nvme 00:03:05.253 CC test/event/reactor_perf/reactor_perf.o 00:03:05.253 CC test/event/app_repeat/app_repeat.o 00:03:05.253 CC test/event/scheduler/scheduler.o 00:03:05.253 LINK spdk_nvme_perf 00:03:05.253 CC app/vhost/vhost.o 00:03:05.253 LINK test_dma 00:03:05.253 LINK spdk_top 00:03:05.253 CC examples/sock/hello_world/hello_sock.o 00:03:05.253 CC examples/vmd/led/led.o 00:03:05.253 CC examples/idxd/perf/perf.o 00:03:05.253 CC examples/vmd/lsvmd/lsvmd.o 00:03:05.253 LINK vhost_fuzz 00:03:05.253 LINK spdk_bdev 00:03:05.253 LINK event_perf 00:03:05.253 CC examples/thread/thread/thread_ex.o 00:03:05.253 LINK mem_callbacks 00:03:05.253 LINK reactor 00:03:05.510 LINK reactor_perf 00:03:05.510 LINK app_repeat 00:03:05.510 LINK lsvmd 00:03:05.510 LINK led 00:03:05.511 LINK vhost 00:03:05.511 LINK scheduler 00:03:05.511 LINK hello_sock 00:03:05.511 LINK thread 00:03:05.511 LINK idxd_perf 00:03:05.768 CC test/nvme/aer/aer.o 00:03:05.768 CC test/nvme/sgl/sgl.o 00:03:05.768 CC test/nvme/e2edp/nvme_dp.o 00:03:05.768 CC test/nvme/simple_copy/simple_copy.o 00:03:05.768 CC test/nvme/boot_partition/boot_partition.o 00:03:05.768 CC test/nvme/fused_ordering/fused_ordering.o 00:03:05.768 CC test/nvme/fdp/fdp.o 00:03:05.768 CC test/nvme/cuse/cuse.o 00:03:05.768 CC test/nvme/compliance/nvme_compliance.o 00:03:05.768 CC test/nvme/connect_stress/connect_stress.o 00:03:05.768 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:05.768 CC test/nvme/err_injection/err_injection.o 00:03:05.768 CC test/nvme/reserve/reserve.o 00:03:05.768 CC test/nvme/startup/startup.o 00:03:05.768 CC test/nvme/overhead/overhead.o 00:03:05.768 LINK memory_ut 00:03:05.768 CC test/nvme/reset/reset.o 00:03:05.768 CC test/accel/dif/dif.o 00:03:05.768 CC test/blobfs/mkfs/mkfs.o 00:03:06.027 CC test/lvol/esnap/esnap.o 00:03:06.027 CC examples/nvme/reconnect/reconnect.o 
00:03:06.027 LINK boot_partition 00:03:06.027 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:06.027 CC examples/nvme/abort/abort.o 00:03:06.027 CC examples/nvme/hello_world/hello_world.o 00:03:06.027 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:06.027 CC examples/nvme/arbitration/arbitration.o 00:03:06.027 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:06.027 CC examples/nvme/hotplug/hotplug.o 00:03:06.027 LINK err_injection 00:03:06.027 LINK fused_ordering 00:03:06.027 LINK connect_stress 00:03:06.027 LINK simple_copy 00:03:06.027 LINK reserve 00:03:06.027 LINK startup 00:03:06.027 LINK doorbell_aers 00:03:06.027 LINK sgl 00:03:06.027 LINK nvme_dp 00:03:06.027 LINK mkfs 00:03:06.027 LINK aer 00:03:06.027 LINK reset 00:03:06.027 CC examples/accel/perf/accel_perf.o 00:03:06.027 LINK overhead 00:03:06.027 LINK fdp 00:03:06.027 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:06.027 CC examples/blob/hello_world/hello_blob.o 00:03:06.027 CC examples/blob/cli/blobcli.o 00:03:06.027 LINK nvme_compliance 00:03:06.284 LINK cmb_copy 00:03:06.284 LINK hello_world 00:03:06.284 LINK pmr_persistence 00:03:06.284 LINK hotplug 00:03:06.284 LINK iscsi_fuzz 00:03:06.284 LINK reconnect 00:03:06.284 LINK arbitration 00:03:06.284 LINK abort 00:03:06.284 LINK hello_blob 00:03:06.284 LINK hello_fsdev 00:03:06.284 LINK nvme_manage 00:03:06.542 LINK dif 00:03:06.542 LINK accel_perf 00:03:06.542 LINK blobcli 00:03:06.799 LINK cuse 00:03:07.059 CC test/bdev/bdevio/bdevio.o 00:03:07.059 CC examples/bdev/hello_world/hello_bdev.o 00:03:07.059 CC examples/bdev/bdevperf/bdevperf.o 00:03:07.317 LINK hello_bdev 00:03:07.317 LINK bdevio 00:03:07.576 LINK bdevperf 00:03:08.143 CC examples/nvmf/nvmf/nvmf.o 00:03:08.401 LINK nvmf 00:03:09.775 LINK esnap 00:03:09.775 00:03:09.775 real 0m55.887s 00:03:09.775 user 8m3.553s 00:03:09.775 sys 3m41.150s 00:03:09.775 12:12:52 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:09.775 12:12:52 make -- common/autotest_common.sh@10 -- $ 
set +x 00:03:09.775 ************************************ 00:03:09.775 END TEST make 00:03:09.775 ************************************ 00:03:09.775 12:12:52 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:09.775 12:12:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:09.775 12:12:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:09.775 12:12:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.775 12:12:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:09.775 12:12:52 -- pm/common@44 -- $ pid=165081 00:03:09.775 12:12:52 -- pm/common@50 -- $ kill -TERM 165081 00:03:09.775 12:12:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.775 12:12:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:09.775 12:12:52 -- pm/common@44 -- $ pid=165082 00:03:09.775 12:12:52 -- pm/common@50 -- $ kill -TERM 165082 00:03:09.775 12:12:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.775 12:12:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:09.775 12:12:52 -- pm/common@44 -- $ pid=165085 00:03:09.775 12:12:52 -- pm/common@50 -- $ kill -TERM 165085 00:03:09.775 12:12:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.775 12:12:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:09.775 12:12:52 -- pm/common@44 -- $ pid=165113 00:03:09.775 12:12:52 -- pm/common@50 -- $ sudo -E kill -TERM 165113 00:03:09.775 12:12:52 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:09.775 12:12:52 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:10.034 12:12:52 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:10.034 12:12:52 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:10.034 12:12:52 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:10.034 12:12:52 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:10.034 12:12:52 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:10.034 12:12:52 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:10.034 12:12:52 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:10.034 12:12:52 -- scripts/common.sh@336 -- # IFS=.-: 00:03:10.034 12:12:52 -- scripts/common.sh@336 -- # read -ra ver1 00:03:10.034 12:12:52 -- scripts/common.sh@337 -- # IFS=.-: 00:03:10.034 12:12:52 -- scripts/common.sh@337 -- # read -ra ver2 00:03:10.034 12:12:52 -- scripts/common.sh@338 -- # local 'op=<' 00:03:10.034 12:12:52 -- scripts/common.sh@340 -- # ver1_l=2 00:03:10.034 12:12:52 -- scripts/common.sh@341 -- # ver2_l=1 00:03:10.034 12:12:52 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:10.034 12:12:52 -- scripts/common.sh@344 -- # case "$op" in 00:03:10.034 12:12:52 -- scripts/common.sh@345 -- # : 1 00:03:10.034 12:12:52 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:10.034 12:12:52 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:10.034 12:12:52 -- scripts/common.sh@365 -- # decimal 1 00:03:10.034 12:12:52 -- scripts/common.sh@353 -- # local d=1 00:03:10.034 12:12:52 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:10.034 12:12:52 -- scripts/common.sh@355 -- # echo 1 00:03:10.034 12:12:52 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:10.034 12:12:52 -- scripts/common.sh@366 -- # decimal 2 00:03:10.034 12:12:52 -- scripts/common.sh@353 -- # local d=2 00:03:10.034 12:12:52 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:10.034 12:12:52 -- scripts/common.sh@355 -- # echo 2 00:03:10.034 12:12:52 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:10.034 12:12:52 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:10.034 12:12:52 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:10.034 12:12:52 -- scripts/common.sh@368 -- # return 0 00:03:10.034 12:12:52 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:10.034 12:12:52 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:10.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.034 --rc genhtml_branch_coverage=1 00:03:10.034 --rc genhtml_function_coverage=1 00:03:10.034 --rc genhtml_legend=1 00:03:10.034 --rc geninfo_all_blocks=1 00:03:10.034 --rc geninfo_unexecuted_blocks=1 00:03:10.034 00:03:10.034 ' 00:03:10.034 12:12:52 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:10.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.034 --rc genhtml_branch_coverage=1 00:03:10.034 --rc genhtml_function_coverage=1 00:03:10.034 --rc genhtml_legend=1 00:03:10.034 --rc geninfo_all_blocks=1 00:03:10.034 --rc geninfo_unexecuted_blocks=1 00:03:10.034 00:03:10.034 ' 00:03:10.034 12:12:52 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:10.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.034 --rc genhtml_branch_coverage=1 00:03:10.034 --rc 
genhtml_function_coverage=1 00:03:10.034 --rc genhtml_legend=1 00:03:10.034 --rc geninfo_all_blocks=1 00:03:10.034 --rc geninfo_unexecuted_blocks=1 00:03:10.034 00:03:10.034 ' 00:03:10.034 12:12:52 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:10.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.034 --rc genhtml_branch_coverage=1 00:03:10.034 --rc genhtml_function_coverage=1 00:03:10.034 --rc genhtml_legend=1 00:03:10.034 --rc geninfo_all_blocks=1 00:03:10.034 --rc geninfo_unexecuted_blocks=1 00:03:10.034 00:03:10.034 ' 00:03:10.034 12:12:52 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:10.034 12:12:52 -- nvmf/common.sh@7 -- # uname -s 00:03:10.034 12:12:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:10.034 12:12:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:10.034 12:12:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:10.034 12:12:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:10.034 12:12:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:10.034 12:12:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:10.034 12:12:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:10.034 12:12:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:10.034 12:12:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:10.034 12:12:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:10.034 12:12:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:10.034 12:12:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:10.034 12:12:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:10.034 12:12:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:10.034 12:12:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:10.034 12:12:53 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:10.034 12:12:53 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:10.034 12:12:53 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:10.034 12:12:53 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:10.034 12:12:53 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:10.034 12:12:53 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:10.035 12:12:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.035 12:12:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.035 12:12:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.035 12:12:53 -- paths/export.sh@5 -- # export PATH 00:03:10.035 12:12:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.035 12:12:53 -- nvmf/common.sh@51 -- # : 0 00:03:10.035 12:12:53 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:10.035 12:12:53 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:10.035 12:12:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:10.035 12:12:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:10.035 12:12:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:10.035 12:12:53 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:10.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:10.035 12:12:53 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:10.035 12:12:53 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:10.035 12:12:53 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:10.035 12:12:53 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:10.035 12:12:53 -- spdk/autotest.sh@32 -- # uname -s 00:03:10.035 12:12:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:10.035 12:12:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:10.035 12:12:53 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:10.035 12:12:53 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:10.035 12:12:53 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:10.035 12:12:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:10.035 12:12:53 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:10.035 12:12:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:10.035 12:12:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:10.035 12:12:53 -- spdk/autotest.sh@48 -- # udevadm_pid=228069 00:03:10.035 12:12:53 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:10.035 12:12:53 -- pm/common@17 -- # local monitor 00:03:10.035 12:12:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.035 12:12:53 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:10.035 12:12:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.035 12:12:53 -- pm/common@21 -- # date +%s 00:03:10.035 12:12:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.035 12:12:53 -- pm/common@21 -- # date +%s 00:03:10.035 12:12:53 -- pm/common@25 -- # sleep 1 00:03:10.035 12:12:53 -- pm/common@21 -- # date +%s 00:03:10.035 12:12:53 -- pm/common@21 -- # date +%s 00:03:10.035 12:12:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732101173 00:03:10.035 12:12:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732101173 00:03:10.035 12:12:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732101173 00:03:10.035 12:12:53 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732101173 00:03:10.035 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732101173_collect-vmstat.pm.log 00:03:10.035 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732101173_collect-cpu-load.pm.log 00:03:10.035 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732101173_collect-cpu-temp.pm.log 00:03:10.035 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732101173_collect-bmc-pm.bmc.pm.log 00:03:10.972 
12:12:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:10.972 12:12:54 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:10.972 12:12:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:10.972 12:12:54 -- common/autotest_common.sh@10 -- # set +x 00:03:10.972 12:12:54 -- spdk/autotest.sh@59 -- # create_test_list 00:03:10.972 12:12:54 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:10.972 12:12:54 -- common/autotest_common.sh@10 -- # set +x 00:03:11.231 12:12:54 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:11.231 12:12:54 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:11.231 12:12:54 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:11.231 12:12:54 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:11.231 12:12:54 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:11.231 12:12:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:11.231 12:12:54 -- common/autotest_common.sh@1457 -- # uname 00:03:11.231 12:12:54 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:11.231 12:12:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:11.231 12:12:54 -- common/autotest_common.sh@1477 -- # uname 00:03:11.231 12:12:54 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:11.231 12:12:54 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:11.231 12:12:54 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:11.231 lcov: LCOV version 1.15 00:03:11.231 12:12:54 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:03:33.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:03:33.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:36.454 12:13:19 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:03:36.454 12:13:19 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:36.454 12:13:19 -- common/autotest_common.sh@10 -- # set +x
00:03:36.454 12:13:19 -- spdk/autotest.sh@78 -- # rm -f
00:03:36.454 12:13:19 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:39.742 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:03:39.742 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:03:39.742 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:03:39.742 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:03:39.742 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:03:39.742 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:03:39.742 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:03:39.742 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:03:39.742 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:03:39.742 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:03:39.742 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:03:39.742 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:03:39.742 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:03:39.742 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:03:39.742 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:03:39.742 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:03:39.742 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:03:39.742 12:13:22 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:03:39.742 12:13:22 -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:03:39.742 12:13:22 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:03:39.742 12:13:22 -- common/autotest_common.sh@1658 -- # local nvme bdf
00:03:39.742 12:13:22 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:03:39.742 12:13:22 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:03:39.742 12:13:22 -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:03:39.742 12:13:22 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:39.742 12:13:22 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:03:39.742 12:13:22 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:03:39.742 12:13:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:39.742 12:13:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:39.742 12:13:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:03:39.742 12:13:22 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:03:39.742 12:13:22 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
No valid GPT data, bailing
00:03:39.742 12:13:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:39.742 12:13:22 -- scripts/common.sh@394 -- # pt=
00:03:39.742 12:13:22 -- scripts/common.sh@395 -- # return 1
00:03:39.742 12:13:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
1+0 records in
00:03:39.742 1+0 records out
00:03:39.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00449105 s, 233 MB/s
00:03:39.742 12:13:22 -- spdk/autotest.sh@105 -- # sync
00:03:39.742 12:13:22 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:39.742 12:13:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:39.742 12:13:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:46.314 12:13:28 -- spdk/autotest.sh@111 -- # uname -s
00:03:46.314 12:13:28 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:46.314 12:13:28 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:46.314 12:13:28 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:48.219 Hugepages
00:03:48.219 node hugesize free / total
00:03:48.219 node0 1048576kB 0 / 0
00:03:48.219 node0 2048kB 0 / 0
00:03:48.219 node1 1048576kB 0 / 0
00:03:48.219 node1 2048kB 0 / 0
00:03:48.219
00:03:48.219 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:48.219 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:48.219 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:48.219 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:48.220 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:48.220 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:48.220 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:48.220 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:48.220 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:48.220 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:03:48.220 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:03:48.220 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:03:48.220 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:03:48.220 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:03:48.220 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:03:48.220 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:03:48.220 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:03:48.220 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:03:48.220 12:13:31 -- spdk/autotest.sh@117 -- # uname -s
00:03:48.220 12:13:31 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:48.220 12:13:31 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:48.220 12:13:31 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:51.509 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:51.509 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:51.509 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:51.509 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:51.509 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:51.509 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:51.509 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:51.509 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:51.509 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:51.509 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:51.509 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:51.509 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:51.509 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:51.509 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:51.509 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:51.509 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:52.078 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:52.078 12:13:35 -- common/autotest_common.sh@1517 -- # sleep 1
00:03:53.017 12:13:36 -- common/autotest_common.sh@1518 -- # bdfs=()
00:03:53.017 12:13:36 -- common/autotest_common.sh@1518 -- # local bdfs
00:03:53.017 12:13:36 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:03:53.017 12:13:36 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:03:53.017 12:13:36 -- common/autotest_common.sh@1498 -- # bdfs=()
00:03:53.017 12:13:36 -- common/autotest_common.sh@1498 -- # local bdfs
00:03:53.017 12:13:36 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:53.017 12:13:36 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:53.017 12:13:36 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:03:53.277 12:13:36 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:03:53.277 12:13:36 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0
00:03:53.277 12:13:36 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:55.813 Waiting for block devices as requested
00:03:56.072 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:03:56.072 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:03:56.072 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:03:56.330 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:03:56.330 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:03:56.330 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:03:56.589 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:03:56.589 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:03:56.589 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:03:56.589 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:03:56.848 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:03:56.848 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:03:56.848 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:03:57.107 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:03:57.107 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:03:57.107 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:03:57.366 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:03:57.366 12:13:40 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:03:57.366 12:13:40 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0
00:03:57.366 12:13:40 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:03:57.366 12:13:40 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme
00:03:57.366 12:13:40 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:03:57.366 12:13:40 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]]
00:03:57.366 12:13:40 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:03:57.366 12:13:40 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:03:57.366 12:13:40 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:03:57.366 12:13:40 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:03:57.366 12:13:40 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:03:57.366 12:13:40 -- common/autotest_common.sh@1531 -- # grep oacs
00:03:57.366 12:13:40 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:03:57.366 12:13:40 -- common/autotest_common.sh@1531 -- # oacs=' 0xe'
00:03:57.366 12:13:40 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:03:57.366 12:13:40 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:03:57.366 12:13:40 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:03:57.366 12:13:40 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:03:57.366 12:13:40 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:03:57.366 12:13:40 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:03:57.366 12:13:40 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:03:57.366 12:13:40 -- common/autotest_common.sh@1543 -- # continue
00:03:57.366 12:13:40 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:03:57.366 12:13:40 -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:57.366 12:13:40 -- common/autotest_common.sh@10 -- # set +x
00:03:57.366 12:13:40 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:03:57.366 12:13:40 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:57.366 12:13:40 -- common/autotest_common.sh@10 -- # set +x
00:03:57.366 12:13:40 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:00.112 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:00.371 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:00.371 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:00.371 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:00.371 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:00.371 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:00.371 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:00.371 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:00.371 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:00.371 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:00.371 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:00.371 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:00.371 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:00.371 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:00.371 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:00.371 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:01.309 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:04:01.309 12:13:44 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:04:01.309 12:13:44 -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:01.309 12:13:44 -- common/autotest_common.sh@10 -- # set +x
00:04:01.309 12:13:44 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:04:01.309 12:13:44 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:04:01.309 12:13:44 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:04:01.309 12:13:44 -- common/autotest_common.sh@1563 -- # bdfs=()
00:04:01.309 12:13:44 -- common/autotest_common.sh@1563 -- # _bdfs=()
00:04:01.309 12:13:44 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:04:01.309 12:13:44 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:04:01.309 12:13:44 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:04:01.309 12:13:44 -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:01.309 12:13:44 -- common/autotest_common.sh@1498 -- # local bdfs
00:04:01.309 12:13:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:01.309 12:13:44 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:01.309 12:13:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:01.309 12:13:44 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:04:01.309 12:13:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0
00:04:01.309 12:13:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:04:01.569 12:13:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device
00:04:01.569 12:13:44 -- common/autotest_common.sh@1566 -- # device=0x0a54
00:04:01.569 12:13:44 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:04:01.569 12:13:44 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:04:01.569 12:13:44 -- common/autotest_common.sh@1572 -- # (( 1 > 0 ))
00:04:01.569 12:13:44 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0
00:04:01.569 12:13:44 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]]
00:04:01.569 12:13:44 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=242494
00:04:01.569 12:13:44 -- common/autotest_common.sh@1585 -- # waitforlisten 242494
00:04:01.569 12:13:44 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:01.569 12:13:44 -- common/autotest_common.sh@835 -- # '[' -z 242494 ']'
00:04:01.569 12:13:44 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:01.569 12:13:44 -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:01.569 12:13:44 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:01.569 12:13:44 -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:01.569 12:13:44 -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 12:13:44.487201] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization...
[2024-11-20 12:13:44.487250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid242494 ]
[2024-11-20 12:13:44.562620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 12:13:44.605383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
12:13:44 -- common/autotest_common.sh@864 -- # (( i == 0 ))
12:13:44 -- common/autotest_common.sh@868 -- # return 0
12:13:44 -- common/autotest_common.sh@1587 -- # bdf_id=0
12:13:44 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
12:13:44 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
nvme0n1
12:13:47 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
[2024-11-20 12:13:48.013852] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
request:
{
"nvme_ctrlr_name": "nvme0",
"password": "test",
"method": "bdev_nvme_opal_revert",
"req_id": 1
}
Got JSON-RPC error response
response:
{
"code": -32602,
"message": "Invalid parameters"
}
12:13:48 -- common/autotest_common.sh@1591 -- # true
00:04:05.117 12:13:48 -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:04:05.117 12:13:48 -- common/autotest_common.sh@1595 -- # killprocess 242494
00:04:05.117 12:13:48 -- common/autotest_common.sh@954 -- # '[' -z 242494 ']'
00:04:05.117 12:13:48 -- common/autotest_common.sh@958 -- # kill -0 242494
00:04:05.117 12:13:48 -- common/autotest_common.sh@959 -- # uname
00:04:05.117 12:13:48 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:05.117 12:13:48 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 242494
00:04:05.117 12:13:48 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:05.117 12:13:48 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:05.117 12:13:48 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 242494'
killing process with pid 242494
12:13:48 -- common/autotest_common.sh@973 -- # kill 242494
12:13:48 -- common/autotest_common.sh@978 -- # wait 242494
00:04:07.021 12:13:49 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:04:07.021 12:13:49 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:04:07.021 12:13:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:07.021 12:13:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:07.021 12:13:49 -- spdk/autotest.sh@149 -- # timing_enter lib
00:04:07.021 12:13:49 -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:07.021 12:13:49 -- common/autotest_common.sh@10 -- # set +x
00:04:07.021 12:13:49 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:04:07.021 12:13:49 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:04:07.021 12:13:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:07.021 12:13:49 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:07.021 12:13:49 -- common/autotest_common.sh@10 -- # set +x
00:04:07.021 ************************************
00:04:07.021 START TEST env
00:04:07.021 ************************************
00:04:07.021 12:13:49 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:04:07.021 * Looking for test storage...
00:04:07.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:04:07.021 12:13:49 env -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:07.021 12:13:49 env -- common/autotest_common.sh@1693 -- # lcov --version
00:04:07.021 12:13:49 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:07.021 12:13:49 env -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:07.021 12:13:49 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:07.021 12:13:49 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:07.021 12:13:49 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:07.021 12:13:49 env -- scripts/common.sh@336 -- # IFS=.-:
00:04:07.021 12:13:49 env -- scripts/common.sh@336 -- # read -ra ver1
00:04:07.021 12:13:49 env -- scripts/common.sh@337 -- # IFS=.-:
00:04:07.021 12:13:49 env -- scripts/common.sh@337 -- # read -ra ver2
00:04:07.021 12:13:49 env -- scripts/common.sh@338 -- # local 'op=<'
00:04:07.021 12:13:49 env -- scripts/common.sh@340 -- # ver1_l=2
00:04:07.021 12:13:49 env -- scripts/common.sh@341 -- # ver2_l=1
00:04:07.021 12:13:49 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:07.021 12:13:49 env -- scripts/common.sh@344 -- # case "$op" in
00:04:07.021 12:13:49 env -- scripts/common.sh@345 -- # : 1
00:04:07.021 12:13:49 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:07.021 12:13:49 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:07.021 12:13:49 env -- scripts/common.sh@365 -- # decimal 1
00:04:07.021 12:13:49 env -- scripts/common.sh@353 -- # local d=1
00:04:07.021 12:13:49 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:07.021 12:13:49 env -- scripts/common.sh@355 -- # echo 1
00:04:07.021 12:13:49 env -- scripts/common.sh@365 -- # ver1[v]=1
00:04:07.021 12:13:49 env -- scripts/common.sh@366 -- # decimal 2
00:04:07.021 12:13:49 env -- scripts/common.sh@353 -- # local d=2
00:04:07.021 12:13:49 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:07.021 12:13:49 env -- scripts/common.sh@355 -- # echo 2
00:04:07.021 12:13:49 env -- scripts/common.sh@366 -- # ver2[v]=2
00:04:07.021 12:13:49 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:07.021 12:13:49 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:07.021 12:13:49 env -- scripts/common.sh@368 -- # return 0
00:04:07.021 12:13:49 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:07.021 12:13:49 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:07.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.021 --rc genhtml_branch_coverage=1
00:04:07.021 --rc genhtml_function_coverage=1
00:04:07.021 --rc genhtml_legend=1
00:04:07.021 --rc geninfo_all_blocks=1
00:04:07.021 --rc geninfo_unexecuted_blocks=1
00:04:07.021
00:04:07.021 '
00:04:07.021 12:13:49 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:07.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.021 --rc genhtml_branch_coverage=1
00:04:07.021 --rc genhtml_function_coverage=1
00:04:07.021 --rc genhtml_legend=1
00:04:07.021 --rc geninfo_all_blocks=1
00:04:07.021 --rc geninfo_unexecuted_blocks=1
00:04:07.021
00:04:07.021 '
00:04:07.021 12:13:49 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:07.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.021 --rc genhtml_branch_coverage=1
00:04:07.021 --rc genhtml_function_coverage=1
00:04:07.021 --rc genhtml_legend=1
00:04:07.021 --rc geninfo_all_blocks=1
00:04:07.021 --rc geninfo_unexecuted_blocks=1
00:04:07.021
00:04:07.021 '
00:04:07.022 12:13:49 env -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:07.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.022 --rc genhtml_branch_coverage=1
00:04:07.022 --rc genhtml_function_coverage=1
00:04:07.022 --rc genhtml_legend=1
00:04:07.022 --rc geninfo_all_blocks=1
00:04:07.022 --rc geninfo_unexecuted_blocks=1
00:04:07.022
00:04:07.022 '
00:04:07.022 12:13:49 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:04:07.022 12:13:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:07.022 12:13:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:07.022 12:13:49 env -- common/autotest_common.sh@10 -- # set +x
00:04:07.022 ************************************
00:04:07.022 START TEST env_memory
00:04:07.022 ************************************
00:04:07.022 12:13:49 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:04:07.022
00:04:07.022
00:04:07.022 CUnit - A unit testing framework for C - Version 2.1-3
00:04:07.022 http://cunit.sourceforge.net/
00:04:07.022
00:04:07.022
00:04:07.022 Suite: memory
00:04:07.022 Test: alloc and free memory map ...[2024-11-20 12:13:49.928650] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:07.022 passed
00:04:07.022 Test: mem map translation ...[2024-11-20 12:13:49.947588] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
[2024-11-20 12:13:49.947604] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
[2024-11-20 12:13:49.947638] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
[2024-11-20 12:13:49.947660] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
passed
00:04:07.022 Test: mem map registration ...[2024-11-20 12:13:49.985445] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
[2024-11-20 12:13:49.985459] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
passed
00:04:07.022 Test: mem map adjacent registrations ...passed
00:04:07.022
00:04:07.022 Run Summary: Type Total Ran Passed Failed Inactive
00:04:07.022 suites 1 1 n/a 0 0
00:04:07.022 tests 4 4 4 0 0
00:04:07.022 asserts 152 152 152 0 n/a
00:04:07.022
00:04:07.022 Elapsed time = 0.141 seconds
00:04:07.022
00:04:07.022 real 0m0.154s
00:04:07.022 user 0m0.143s
00:04:07.022 sys 0m0.010s
00:04:07.022 12:13:50 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:07.022 12:13:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:04:07.022 ************************************
00:04:07.022 END TEST env_memory
00:04:07.022 ************************************
00:04:07.022 12:13:50 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:07.022 12:13:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:07.022 12:13:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:07.022 12:13:50 env -- common/autotest_common.sh@10 -- # set +x
00:04:07.022 ************************************
00:04:07.022 START TEST env_vtophys
00:04:07.022 ************************************
00:04:07.022 12:13:50 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:07.022 EAL: lib.eal log level changed from notice to debug
00:04:07.022 EAL: Detected lcore 0 as core 0 on socket 0
00:04:07.022 EAL: Detected lcore 1 as core 1 on socket 0
00:04:07.022 EAL: Detected lcore 2 as core 2 on socket 0
00:04:07.022 EAL: Detected lcore 3 as core 3 on socket 0
00:04:07.022 EAL: Detected lcore 4 as core 4 on socket 0
00:04:07.022 EAL: Detected lcore 5 as core 5 on socket 0
00:04:07.022 EAL: Detected lcore 6 as core 6 on socket 0
00:04:07.022 EAL: Detected lcore 7 as core 8 on socket 0
00:04:07.022 EAL: Detected lcore 8 as core 9 on socket 0
00:04:07.022 EAL: Detected lcore 9 as core 10 on socket 0
00:04:07.022 EAL: Detected lcore 10 as core 11 on socket 0
00:04:07.022 EAL: Detected lcore 11 as core 12 on socket 0
00:04:07.022 EAL: Detected lcore 12 as core 13 on socket 0
00:04:07.022 EAL: Detected lcore 13 as core 16 on socket 0
00:04:07.022 EAL: Detected lcore 14 as core 17 on socket 0
00:04:07.022 EAL: Detected lcore 15 as core 18 on socket 0
00:04:07.022 EAL: Detected lcore 16 as core 19 on socket 0
00:04:07.022 EAL: Detected lcore 17 as core 20 on socket 0
00:04:07.022 EAL: Detected lcore 18 as core 21 on socket 0
00:04:07.022 EAL: Detected lcore 19 as core 25 on socket 0
00:04:07.022 EAL: Detected lcore 20 as core 26 on socket 0
00:04:07.022 EAL: Detected lcore 21 as core 27 on socket 0
00:04:07.022 EAL: Detected lcore 22 as core 28 on socket 0
00:04:07.022 EAL: Detected lcore 23 as core 29 on socket 0
00:04:07.022 EAL: Detected lcore 24 as core 0 on socket 1
00:04:07.022 EAL: Detected lcore 25 as core 1 on socket 1
00:04:07.022 EAL: Detected lcore 26 as core 2 on socket 1
00:04:07.022 EAL: Detected lcore 27 as core 3 on socket 1
00:04:07.022 EAL: Detected lcore 28 as core 4 on socket 1
00:04:07.022 EAL: Detected lcore 29 as core 5 on socket 1
00:04:07.022 EAL: Detected lcore 30 as core 6 on socket 1
00:04:07.022 EAL: Detected lcore 31 as core 9 on socket 1
00:04:07.022 EAL: Detected lcore 32 as core 10 on socket 1
00:04:07.022 EAL: Detected lcore 33 as core 11 on socket 1
00:04:07.022 EAL: Detected lcore 34 as core 12 on socket 1
00:04:07.022 EAL: Detected lcore 35 as core 13 on socket 1
00:04:07.022 EAL: Detected lcore 36 as core 16 on socket 1
00:04:07.022 EAL: Detected lcore 37 as core 17 on socket 1
00:04:07.022 EAL: Detected lcore 38 as core 18 on socket 1
00:04:07.022 EAL: Detected lcore 39 as core 19 on socket 1
00:04:07.022 EAL: Detected lcore 40 as core 20 on socket 1
00:04:07.022 EAL: Detected lcore 41 as core 21 on socket 1
00:04:07.022 EAL: Detected lcore 42 as core 24 on socket 1
00:04:07.022 EAL: Detected lcore 43 as core 25 on socket 1
00:04:07.022 EAL: Detected lcore 44 as core 26 on socket 1
00:04:07.022 EAL: Detected lcore 45 as core 27 on socket 1
00:04:07.022 EAL: Detected lcore 46 as core 28 on socket 1
00:04:07.022 EAL: Detected lcore 47 as core 29 on socket 1
00:04:07.022 EAL: Detected lcore 48 as core 0 on socket 0
00:04:07.022 EAL: Detected lcore 49 as core 1 on socket 0
00:04:07.022 EAL: Detected lcore 50 as core 2 on socket 0
00:04:07.022 EAL: Detected lcore 51 as core 3 on socket 0
00:04:07.022 EAL: Detected lcore 52 as core 4 on socket 0
00:04:07.022 EAL: Detected lcore 53 as core 5 on socket 0
00:04:07.022 EAL: Detected lcore 54 as core 6 on socket 0
00:04:07.022 EAL: Detected lcore 55 as core 8 on socket 0
00:04:07.022 EAL: Detected lcore 56 as core 9 on socket 0
00:04:07.022 EAL: Detected lcore 57 as core 10 on socket 0
00:04:07.022 EAL: Detected lcore 58 as core 11 on socket 0
00:04:07.022 EAL: Detected lcore 59 as core 12 on socket 0
00:04:07.022 EAL: Detected lcore 60 as core 13 on socket 0
00:04:07.022 EAL: Detected lcore 61 as core 16 on socket 0
00:04:07.022 EAL: Detected lcore 62 as core 17 on socket 0
00:04:07.022 EAL: Detected lcore 63 as core 18 on socket 0
00:04:07.022 EAL: Detected lcore 64 as core 19 on socket 0
00:04:07.022 EAL: Detected lcore 65 as core 20 on socket 0
00:04:07.022 EAL: Detected lcore 66 as core 21 on socket 0
00:04:07.022 EAL: Detected lcore 67 as core 25 on socket 0
00:04:07.022 EAL: Detected lcore 68 as core 26 on socket 0
00:04:07.022 EAL: Detected lcore 69 as core 27 on socket 0
00:04:07.022 EAL: Detected lcore 70 as core 28 on socket 0
00:04:07.022 EAL: Detected lcore 71 as core 29 on socket 0
00:04:07.022 EAL: Detected lcore 72 as core 0 on socket 1
00:04:07.022 EAL: Detected lcore 73 as core 1 on socket 1
00:04:07.022 EAL: Detected lcore 74 as core 2 on socket 1
00:04:07.022 EAL: Detected lcore 75 as core 3 on socket 1
00:04:07.022 EAL: Detected lcore 76 as core 4 on socket 1
00:04:07.022 EAL: Detected lcore 77 as core 5 on socket 1
00:04:07.022 EAL: Detected lcore 78 as core 6 on socket 1
00:04:07.022 EAL: Detected lcore 79 as core 9 on socket 1
00:04:07.022 EAL: Detected lcore 80 as core 10 on socket 1
00:04:07.022 EAL: Detected lcore 81 as core 11 on socket 1
00:04:07.022 EAL: Detected lcore 82 as core 12 on socket 1
00:04:07.022 EAL: Detected lcore 83 as core 13 on socket 1
00:04:07.022 EAL: Detected lcore 84 as core 16 on socket 1
00:04:07.022 EAL: Detected lcore 85 as core 17 on socket 1
00:04:07.022 EAL: Detected lcore 86 as core 18 on socket 1
00:04:07.022 EAL: Detected lcore 87 as core 19 on socket 1
00:04:07.022 EAL: Detected lcore 88 as core 20 on socket 1
00:04:07.022 EAL: Detected lcore 89 as core 21 on socket 1
00:04:07.022 EAL: Detected lcore 90 as core 24 on socket 1
00:04:07.022 EAL: Detected lcore 91 as core 25 on socket 1
00:04:07.022 EAL: Detected lcore 92 as core 26 on socket 1
00:04:07.022 EAL: Detected lcore 93 as core 27 on socket 1
00:04:07.022 EAL: Detected lcore 94 as core 28 on socket 1
00:04:07.022 EAL: Detected lcore 95 as core 29 on socket 1
00:04:07.022 EAL: Maximum logical cores by configuration: 128
00:04:07.022 EAL: Detected CPU lcores: 96
00:04:07.022 EAL: Detected NUMA nodes: 2
00:04:07.022 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:04:07.022 EAL: Detected shared linkage of DPDK
00:04:07.281 EAL: No shared files mode enabled, IPC will be disabled
00:04:07.281 EAL: Bus pci wants IOVA as 'DC'
00:04:07.281 EAL: Buses did not request a specific IOVA mode.
00:04:07.281 EAL: IOMMU is available, selecting IOVA as VA mode.
00:04:07.281 EAL: Selected IOVA mode 'VA'
00:04:07.281 EAL: Probing VFIO support...
00:04:07.281 EAL: IOMMU type 1 (Type 1) is supported
00:04:07.281 EAL: IOMMU type 7 (sPAPR) is not supported
00:04:07.281 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:04:07.281 EAL: VFIO support initialized
00:04:07.281 EAL: Ask a virtual area of 0x2e000 bytes
00:04:07.281 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:07.281 EAL: Setting up physically contiguous memory...
00:04:07.281 EAL: Setting maximum number of open files to 524288
00:04:07.281 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:07.281 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:04:07.281 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:07.281 EAL: Ask a virtual area of 0x61000 bytes
00:04:07.281 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:07.281 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:07.281 EAL: Ask a virtual area of 0x400000000 bytes
00:04:07.281 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:07.281 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:07.281 EAL: Ask a virtual area of 0x61000 bytes
00:04:07.281 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:07.281 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:07.281 EAL: Ask a virtual area of 0x400000000 bytes
00:04:07.281 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:07.281 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:07.281 EAL: Ask a virtual area of 0x61000 bytes
00:04:07.281 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:07.281 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:07.281 EAL: Ask a virtual area of 0x400000000 bytes
00:04:07.281 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:07.281 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:07.281 EAL: Ask a virtual area of 0x61000 bytes
00:04:07.281 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:07.281 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:07.281 EAL: Ask a virtual area of 0x400000000 bytes
00:04:07.281 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:07.281 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:07.281 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:04:07.281 EAL: Ask a virtual area of 0x61000 bytes
00:04:07.281 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:04:07.281 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:07.281 EAL: Ask a virtual area of 0x400000000 bytes
00:04:07.281 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:04:07.281 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:04:07.281 EAL: Ask a virtual area of 0x61000 bytes
00:04:07.281 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:04:07.281 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:07.281 EAL: Ask a virtual area of 0x400000000 bytes
00:04:07.281 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:04:07.281 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:04:07.281 EAL: Ask a virtual area of 0x61000 bytes
00:04:07.281 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:04:07.281 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:07.281 EAL: Ask a virtual area of 0x400000000 bytes
00:04:07.281 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:04:07.281 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:04:07.281 EAL: Ask a virtual area of 0x61000 bytes
00:04:07.281 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:04:07.281 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:07.281 EAL: Ask a virtual area of 0x400000000 bytes
00:04:07.281 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:04:07.281 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:04:07.281 EAL: Hugepages will be freed exactly as allocated.
00:04:07.281 EAL: No shared files mode enabled, IPC is disabled
00:04:07.281 EAL: No shared files mode enabled, IPC is disabled
00:04:07.281 EAL: TSC frequency is ~2300000 KHz
00:04:07.281 EAL: Main lcore 0 is ready (tid=7fd5ed6c2a00;cpuset=[0])
00:04:07.281 EAL: Trying to obtain current memory policy.
00:04:07.281 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:07.281 EAL: Restoring previous memory policy: 0
00:04:07.281 EAL: request: mp_malloc_sync
00:04:07.281 EAL: No shared files mode enabled, IPC is disabled
00:04:07.281 EAL: Heap on socket 0 was expanded by 2MB
00:04:07.281 EAL: No shared files mode enabled, IPC is disabled
00:04:07.282 EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:04:07.282 EAL: Mem event callback 'spdk:(nil)' registered
00:04:07.282
00:04:07.282
00:04:07.282 CUnit - A unit testing framework for C - Version 2.1-3
00:04:07.282 http://cunit.sourceforge.net/
00:04:07.282
00:04:07.282
00:04:07.282 Suite: components_suite
00:04:07.282 Test: vtophys_malloc_test ...passed
00:04:07.282 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:07.282 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:07.282 EAL: Restoring previous memory policy: 4
00:04:07.282 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.282 EAL: request: mp_malloc_sync
00:04:07.282 EAL: No shared files mode enabled, IPC is disabled
00:04:07.282 EAL: Heap on socket 0 was expanded by 4MB
00:04:07.282 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.282 EAL: request: mp_malloc_sync
00:04:07.282 EAL: No shared files mode enabled, IPC is disabled
00:04:07.282 EAL: Heap on socket 0 was shrunk by 4MB
00:04:07.282 EAL: Trying to obtain current memory policy.
00:04:07.282 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.282 EAL: Restoring previous memory policy: 4 00:04:07.282 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.282 EAL: request: mp_malloc_sync 00:04:07.282 EAL: No shared files mode enabled, IPC is disabled 00:04:07.282 EAL: Heap on socket 0 was expanded by 6MB 00:04:07.282 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.282 EAL: request: mp_malloc_sync 00:04:07.282 EAL: No shared files mode enabled, IPC is disabled 00:04:07.282 EAL: Heap on socket 0 was shrunk by 6MB 00:04:07.282 EAL: Trying to obtain current memory policy. 00:04:07.282 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.282 EAL: Restoring previous memory policy: 4 00:04:07.282 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.282 EAL: request: mp_malloc_sync 00:04:07.282 EAL: No shared files mode enabled, IPC is disabled 00:04:07.282 EAL: Heap on socket 0 was expanded by 10MB 00:04:07.282 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.282 EAL: request: mp_malloc_sync 00:04:07.282 EAL: No shared files mode enabled, IPC is disabled 00:04:07.282 EAL: Heap on socket 0 was shrunk by 10MB 00:04:07.282 EAL: Trying to obtain current memory policy. 00:04:07.282 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.282 EAL: Restoring previous memory policy: 4 00:04:07.282 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.282 EAL: request: mp_malloc_sync 00:04:07.282 EAL: No shared files mode enabled, IPC is disabled 00:04:07.282 EAL: Heap on socket 0 was expanded by 18MB 00:04:07.282 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.282 EAL: request: mp_malloc_sync 00:04:07.282 EAL: No shared files mode enabled, IPC is disabled 00:04:07.282 EAL: Heap on socket 0 was shrunk by 18MB 00:04:07.282 EAL: Trying to obtain current memory policy. 
00:04:07.282 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.282 EAL: Restoring previous memory policy: 4 00:04:07.282 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.282 EAL: request: mp_malloc_sync 00:04:07.282 EAL: No shared files mode enabled, IPC is disabled 00:04:07.282 EAL: Heap on socket 0 was expanded by 34MB 00:04:07.282 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.282 EAL: request: mp_malloc_sync 00:04:07.282 EAL: No shared files mode enabled, IPC is disabled 00:04:07.282 EAL: Heap on socket 0 was shrunk by 34MB 00:04:07.282 EAL: Trying to obtain current memory policy. 00:04:07.282 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.282 EAL: Restoring previous memory policy: 4 00:04:07.282 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.282 EAL: request: mp_malloc_sync 00:04:07.282 EAL: No shared files mode enabled, IPC is disabled 00:04:07.282 EAL: Heap on socket 0 was expanded by 66MB 00:04:07.282 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.282 EAL: request: mp_malloc_sync 00:04:07.282 EAL: No shared files mode enabled, IPC is disabled 00:04:07.282 EAL: Heap on socket 0 was shrunk by 66MB 00:04:07.282 EAL: Trying to obtain current memory policy. 00:04:07.282 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.282 EAL: Restoring previous memory policy: 4 00:04:07.282 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.282 EAL: request: mp_malloc_sync 00:04:07.282 EAL: No shared files mode enabled, IPC is disabled 00:04:07.282 EAL: Heap on socket 0 was expanded by 130MB 00:04:07.282 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.282 EAL: request: mp_malloc_sync 00:04:07.282 EAL: No shared files mode enabled, IPC is disabled 00:04:07.282 EAL: Heap on socket 0 was shrunk by 130MB 00:04:07.282 EAL: Trying to obtain current memory policy. 
00:04:07.282 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.282 EAL: Restoring previous memory policy: 4 00:04:07.282 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.282 EAL: request: mp_malloc_sync 00:04:07.282 EAL: No shared files mode enabled, IPC is disabled 00:04:07.282 EAL: Heap on socket 0 was expanded by 258MB 00:04:07.282 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.541 EAL: request: mp_malloc_sync 00:04:07.541 EAL: No shared files mode enabled, IPC is disabled 00:04:07.541 EAL: Heap on socket 0 was shrunk by 258MB 00:04:07.541 EAL: Trying to obtain current memory policy. 00:04:07.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.541 EAL: Restoring previous memory policy: 4 00:04:07.541 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.541 EAL: request: mp_malloc_sync 00:04:07.541 EAL: No shared files mode enabled, IPC is disabled 00:04:07.541 EAL: Heap on socket 0 was expanded by 514MB 00:04:07.541 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.800 EAL: request: mp_malloc_sync 00:04:07.800 EAL: No shared files mode enabled, IPC is disabled 00:04:07.800 EAL: Heap on socket 0 was shrunk by 514MB 00:04:07.800 EAL: Trying to obtain current memory policy. 
00:04:07.800 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.800 EAL: Restoring previous memory policy: 4 00:04:07.800 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.800 EAL: request: mp_malloc_sync 00:04:07.800 EAL: No shared files mode enabled, IPC is disabled 00:04:07.800 EAL: Heap on socket 0 was expanded by 1026MB 00:04:08.059 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.318 EAL: request: mp_malloc_sync 00:04:08.318 EAL: No shared files mode enabled, IPC is disabled 00:04:08.318 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:08.318 passed 00:04:08.318 00:04:08.318 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.318 suites 1 1 n/a 0 0 00:04:08.318 tests 2 2 2 0 0 00:04:08.318 asserts 497 497 497 0 n/a 00:04:08.318 00:04:08.318 Elapsed time = 0.975 seconds 00:04:08.318 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.318 EAL: request: mp_malloc_sync 00:04:08.318 EAL: No shared files mode enabled, IPC is disabled 00:04:08.318 EAL: Heap on socket 0 was shrunk by 2MB 00:04:08.318 EAL: No shared files mode enabled, IPC is disabled 00:04:08.318 EAL: No shared files mode enabled, IPC is disabled 00:04:08.318 EAL: No shared files mode enabled, IPC is disabled 00:04:08.318 00:04:08.318 real 0m1.101s 00:04:08.318 user 0m0.647s 00:04:08.318 sys 0m0.429s 00:04:08.318 12:13:51 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.318 12:13:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:08.318 ************************************ 00:04:08.318 END TEST env_vtophys 00:04:08.318 ************************************ 00:04:08.318 12:13:51 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:08.318 12:13:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.318 12:13:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.318 12:13:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.318 
************************************ 00:04:08.318 START TEST env_pci 00:04:08.318 ************************************ 00:04:08.318 12:13:51 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:08.318 00:04:08.318 00:04:08.318 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.318 http://cunit.sourceforge.net/ 00:04:08.318 00:04:08.318 00:04:08.318 Suite: pci 00:04:08.318 Test: pci_hook ...[2024-11-20 12:13:51.284846] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 243719 has claimed it 00:04:08.318 EAL: Cannot find device (10000:00:01.0) 00:04:08.318 EAL: Failed to attach device on primary process 00:04:08.318 passed 00:04:08.318 00:04:08.318 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.318 suites 1 1 n/a 0 0 00:04:08.318 tests 1 1 1 0 0 00:04:08.318 asserts 25 25 25 0 n/a 00:04:08.318 00:04:08.318 Elapsed time = 0.029 seconds 00:04:08.318 00:04:08.318 real 0m0.046s 00:04:08.318 user 0m0.013s 00:04:08.318 sys 0m0.033s 00:04:08.318 12:13:51 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.318 12:13:51 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:08.318 ************************************ 00:04:08.318 END TEST env_pci 00:04:08.318 ************************************ 00:04:08.318 12:13:51 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:08.318 12:13:51 env -- env/env.sh@15 -- # uname 00:04:08.318 12:13:51 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:08.318 12:13:51 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:08.318 12:13:51 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:08.318 12:13:51 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:08.318 12:13:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.318 12:13:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.318 ************************************ 00:04:08.318 START TEST env_dpdk_post_init 00:04:08.318 ************************************ 00:04:08.318 12:13:51 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:08.318 EAL: Detected CPU lcores: 96 00:04:08.318 EAL: Detected NUMA nodes: 2 00:04:08.318 EAL: Detected shared linkage of DPDK 00:04:08.318 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:08.577 EAL: Selected IOVA mode 'VA' 00:04:08.577 EAL: VFIO support initialized 00:04:08.577 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:08.577 EAL: Using IOMMU type 1 (Type 1) 00:04:08.577 EAL: Ignore mapping IO port bar(1) 00:04:08.577 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:08.577 EAL: Ignore mapping IO port bar(1) 00:04:08.577 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:08.577 EAL: Ignore mapping IO port bar(1) 00:04:08.577 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:08.577 EAL: Ignore mapping IO port bar(1) 00:04:08.577 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:08.577 EAL: Ignore mapping IO port bar(1) 00:04:08.577 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:08.577 EAL: Ignore mapping IO port bar(1) 00:04:08.577 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:08.577 EAL: Ignore mapping IO port bar(1) 00:04:08.577 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:08.577 EAL: Ignore mapping IO port bar(1) 00:04:08.577 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:09.514 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:09.514 EAL: Ignore mapping IO port bar(1) 00:04:09.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:09.514 EAL: Ignore mapping IO port bar(1) 00:04:09.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:09.514 EAL: Ignore mapping IO port bar(1) 00:04:09.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:09.514 EAL: Ignore mapping IO port bar(1) 00:04:09.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:09.514 EAL: Ignore mapping IO port bar(1) 00:04:09.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:09.514 EAL: Ignore mapping IO port bar(1) 00:04:09.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:09.514 EAL: Ignore mapping IO port bar(1) 00:04:09.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:09.514 EAL: Ignore mapping IO port bar(1) 00:04:09.514 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:12.801 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:12.801 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:12.801 Starting DPDK initialization... 00:04:12.801 Starting SPDK post initialization... 00:04:12.801 SPDK NVMe probe 00:04:12.801 Attaching to 0000:5e:00.0 00:04:12.801 Attached to 0000:5e:00.0 00:04:12.801 Cleaning up... 
00:04:12.801 00:04:12.801 real 0m4.415s 00:04:12.801 user 0m3.033s 00:04:12.801 sys 0m0.451s 00:04:12.801 12:13:55 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.801 12:13:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:12.801 ************************************ 00:04:12.801 END TEST env_dpdk_post_init 00:04:12.801 ************************************ 00:04:12.801 12:13:55 env -- env/env.sh@26 -- # uname 00:04:12.801 12:13:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:12.801 12:13:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:12.801 12:13:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.801 12:13:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.801 12:13:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.801 ************************************ 00:04:12.801 START TEST env_mem_callbacks 00:04:12.801 ************************************ 00:04:12.801 12:13:55 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:12.801 EAL: Detected CPU lcores: 96 00:04:12.801 EAL: Detected NUMA nodes: 2 00:04:12.801 EAL: Detected shared linkage of DPDK 00:04:12.801 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:12.801 EAL: Selected IOVA mode 'VA' 00:04:12.801 EAL: VFIO support initialized 00:04:12.801 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:12.801 00:04:12.801 00:04:12.801 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.801 http://cunit.sourceforge.net/ 00:04:12.801 00:04:12.801 00:04:12.801 Suite: memory 00:04:12.801 Test: test ... 
00:04:12.801 register 0x200000200000 2097152 00:04:12.801 malloc 3145728 00:04:12.801 register 0x200000400000 4194304 00:04:12.801 buf 0x200000500000 len 3145728 PASSED 00:04:12.801 malloc 64 00:04:12.801 buf 0x2000004fff40 len 64 PASSED 00:04:12.801 malloc 4194304 00:04:13.060 register 0x200000800000 6291456 00:04:13.060 buf 0x200000a00000 len 4194304 PASSED 00:04:13.060 free 0x200000500000 3145728 00:04:13.060 free 0x2000004fff40 64 00:04:13.060 unregister 0x200000400000 4194304 PASSED 00:04:13.060 free 0x200000a00000 4194304 00:04:13.060 unregister 0x200000800000 6291456 PASSED 00:04:13.060 malloc 8388608 00:04:13.060 register 0x200000400000 10485760 00:04:13.060 buf 0x200000600000 len 8388608 PASSED 00:04:13.060 free 0x200000600000 8388608 00:04:13.060 unregister 0x200000400000 10485760 PASSED 00:04:13.060 passed 00:04:13.060 00:04:13.060 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.060 suites 1 1 n/a 0 0 00:04:13.060 tests 1 1 1 0 0 00:04:13.060 asserts 15 15 15 0 n/a 00:04:13.060 00:04:13.060 Elapsed time = 0.007 seconds 00:04:13.060 00:04:13.060 real 0m0.048s 00:04:13.060 user 0m0.008s 00:04:13.060 sys 0m0.040s 00:04:13.060 12:13:55 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.060 12:13:55 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:13.060 ************************************ 00:04:13.060 END TEST env_mem_callbacks 00:04:13.060 ************************************ 00:04:13.060 00:04:13.060 real 0m6.282s 00:04:13.060 user 0m4.094s 00:04:13.060 sys 0m1.266s 00:04:13.060 12:13:55 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.060 12:13:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.060 ************************************ 00:04:13.060 END TEST env 00:04:13.060 ************************************ 00:04:13.060 12:13:55 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:13.060 12:13:55 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.060 12:13:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.060 12:13:55 -- common/autotest_common.sh@10 -- # set +x 00:04:13.060 ************************************ 00:04:13.060 START TEST rpc 00:04:13.060 ************************************ 00:04:13.060 12:13:56 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:13.060 * Looking for test storage... 00:04:13.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:13.060 12:13:56 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:13.060 12:13:56 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:13.060 12:13:56 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:13.320 12:13:56 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:13.320 12:13:56 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.320 12:13:56 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.320 12:13:56 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.320 12:13:56 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.320 12:13:56 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.320 12:13:56 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.320 12:13:56 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.320 12:13:56 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.320 12:13:56 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.320 12:13:56 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.320 12:13:56 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.320 12:13:56 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:13.320 12:13:56 rpc -- scripts/common.sh@345 -- # : 1 00:04:13.320 12:13:56 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.320 12:13:56 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:13.320 12:13:56 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:13.320 12:13:56 rpc -- scripts/common.sh@353 -- # local d=1 00:04:13.320 12:13:56 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.320 12:13:56 rpc -- scripts/common.sh@355 -- # echo 1 00:04:13.320 12:13:56 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.320 12:13:56 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:13.320 12:13:56 rpc -- scripts/common.sh@353 -- # local d=2 00:04:13.320 12:13:56 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.320 12:13:56 rpc -- scripts/common.sh@355 -- # echo 2 00:04:13.320 12:13:56 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.320 12:13:56 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.320 12:13:56 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.320 12:13:56 rpc -- scripts/common.sh@368 -- # return 0 00:04:13.320 12:13:56 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.320 12:13:56 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:13.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.320 --rc genhtml_branch_coverage=1 00:04:13.320 --rc genhtml_function_coverage=1 00:04:13.320 --rc genhtml_legend=1 00:04:13.320 --rc geninfo_all_blocks=1 00:04:13.320 --rc geninfo_unexecuted_blocks=1 00:04:13.320 00:04:13.320 ' 00:04:13.320 12:13:56 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:13.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.320 --rc genhtml_branch_coverage=1 00:04:13.320 --rc genhtml_function_coverage=1 00:04:13.320 --rc genhtml_legend=1 00:04:13.320 --rc geninfo_all_blocks=1 00:04:13.320 --rc geninfo_unexecuted_blocks=1 00:04:13.320 00:04:13.320 ' 00:04:13.320 12:13:56 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:13.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:13.320 --rc genhtml_branch_coverage=1 00:04:13.320 --rc genhtml_function_coverage=1 00:04:13.320 --rc genhtml_legend=1 00:04:13.320 --rc geninfo_all_blocks=1 00:04:13.320 --rc geninfo_unexecuted_blocks=1 00:04:13.320 00:04:13.320 ' 00:04:13.320 12:13:56 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:13.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.320 --rc genhtml_branch_coverage=1 00:04:13.320 --rc genhtml_function_coverage=1 00:04:13.320 --rc genhtml_legend=1 00:04:13.320 --rc geninfo_all_blocks=1 00:04:13.320 --rc geninfo_unexecuted_blocks=1 00:04:13.320 00:04:13.320 ' 00:04:13.320 12:13:56 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:13.320 12:13:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=244644 00:04:13.320 12:13:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.320 12:13:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 244644 00:04:13.320 12:13:56 rpc -- common/autotest_common.sh@835 -- # '[' -z 244644 ']' 00:04:13.320 12:13:56 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.320 12:13:56 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.320 12:13:56 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.320 12:13:56 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.320 12:13:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.320 [2024-11-20 12:13:56.255008] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:04:13.320 [2024-11-20 12:13:56.255053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244644 ] 00:04:13.320 [2024-11-20 12:13:56.330267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.320 [2024-11-20 12:13:56.372264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:13.320 [2024-11-20 12:13:56.372298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 244644' to capture a snapshot of events at runtime. 00:04:13.320 [2024-11-20 12:13:56.372305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:13.320 [2024-11-20 12:13:56.372312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:13.320 [2024-11-20 12:13:56.372317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid244644 for offline analysis/debug. 
00:04:13.320 [2024-11-20 12:13:56.372900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.580 12:13:56 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.580 12:13:56 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:13.580 12:13:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:13.580 12:13:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:13.580 12:13:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:13.580 12:13:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:13.580 12:13:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.580 12:13:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.580 12:13:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.580 ************************************ 00:04:13.580 START TEST rpc_integrity 00:04:13.580 ************************************ 00:04:13.580 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:13.580 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:13.580 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.580 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.580 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.580 12:13:56 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:13.580 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:13.580 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:13.580 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:13.580 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.580 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.580 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.580 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:13.580 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:13.580 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.580 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.580 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.840 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:13.840 { 00:04:13.840 "name": "Malloc0", 00:04:13.840 "aliases": [ 00:04:13.840 "999ff58f-0a07-43ba-99b4-59f5027af4f1" 00:04:13.840 ], 00:04:13.840 "product_name": "Malloc disk", 00:04:13.840 "block_size": 512, 00:04:13.840 "num_blocks": 16384, 00:04:13.840 "uuid": "999ff58f-0a07-43ba-99b4-59f5027af4f1", 00:04:13.840 "assigned_rate_limits": { 00:04:13.840 "rw_ios_per_sec": 0, 00:04:13.840 "rw_mbytes_per_sec": 0, 00:04:13.840 "r_mbytes_per_sec": 0, 00:04:13.840 "w_mbytes_per_sec": 0 00:04:13.840 }, 00:04:13.840 "claimed": false, 00:04:13.840 "zoned": false, 00:04:13.840 "supported_io_types": { 00:04:13.840 "read": true, 00:04:13.840 "write": true, 00:04:13.840 "unmap": true, 00:04:13.840 "flush": true, 00:04:13.840 "reset": true, 00:04:13.840 "nvme_admin": false, 00:04:13.840 "nvme_io": false, 00:04:13.840 "nvme_io_md": false, 00:04:13.840 "write_zeroes": true, 00:04:13.840 "zcopy": true, 00:04:13.840 "get_zone_info": false, 00:04:13.840 
"zone_management": false, 00:04:13.840 "zone_append": false, 00:04:13.840 "compare": false, 00:04:13.840 "compare_and_write": false, 00:04:13.840 "abort": true, 00:04:13.840 "seek_hole": false, 00:04:13.840 "seek_data": false, 00:04:13.840 "copy": true, 00:04:13.840 "nvme_iov_md": false 00:04:13.840 }, 00:04:13.840 "memory_domains": [ 00:04:13.840 { 00:04:13.840 "dma_device_id": "system", 00:04:13.840 "dma_device_type": 1 00:04:13.840 }, 00:04:13.840 { 00:04:13.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.840 "dma_device_type": 2 00:04:13.840 } 00:04:13.840 ], 00:04:13.840 "driver_specific": {} 00:04:13.840 } 00:04:13.840 ]' 00:04:13.840 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:13.840 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:13.840 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:13.840 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.840 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.840 [2024-11-20 12:13:56.744094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:13.840 [2024-11-20 12:13:56.744122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:13.840 [2024-11-20 12:13:56.744134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16366e0 00:04:13.840 [2024-11-20 12:13:56.744140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:13.840 [2024-11-20 12:13:56.745260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:13.840 [2024-11-20 12:13:56.745281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:13.840 Passthru0 00:04:13.840 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.840 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:13.840 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.841 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.841 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.841 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:13.841 { 00:04:13.841 "name": "Malloc0", 00:04:13.841 "aliases": [ 00:04:13.841 "999ff58f-0a07-43ba-99b4-59f5027af4f1" 00:04:13.841 ], 00:04:13.841 "product_name": "Malloc disk", 00:04:13.841 "block_size": 512, 00:04:13.841 "num_blocks": 16384, 00:04:13.841 "uuid": "999ff58f-0a07-43ba-99b4-59f5027af4f1", 00:04:13.841 "assigned_rate_limits": { 00:04:13.841 "rw_ios_per_sec": 0, 00:04:13.841 "rw_mbytes_per_sec": 0, 00:04:13.841 "r_mbytes_per_sec": 0, 00:04:13.841 "w_mbytes_per_sec": 0 00:04:13.841 }, 00:04:13.841 "claimed": true, 00:04:13.841 "claim_type": "exclusive_write", 00:04:13.841 "zoned": false, 00:04:13.841 "supported_io_types": { 00:04:13.841 "read": true, 00:04:13.841 "write": true, 00:04:13.841 "unmap": true, 00:04:13.841 "flush": true, 00:04:13.841 "reset": true, 00:04:13.841 "nvme_admin": false, 00:04:13.841 "nvme_io": false, 00:04:13.841 "nvme_io_md": false, 00:04:13.841 "write_zeroes": true, 00:04:13.841 "zcopy": true, 00:04:13.841 "get_zone_info": false, 00:04:13.841 "zone_management": false, 00:04:13.841 "zone_append": false, 00:04:13.841 "compare": false, 00:04:13.841 "compare_and_write": false, 00:04:13.841 "abort": true, 00:04:13.841 "seek_hole": false, 00:04:13.841 "seek_data": false, 00:04:13.841 "copy": true, 00:04:13.841 "nvme_iov_md": false 00:04:13.841 }, 00:04:13.841 "memory_domains": [ 00:04:13.841 { 00:04:13.841 "dma_device_id": "system", 00:04:13.841 "dma_device_type": 1 00:04:13.841 }, 00:04:13.841 { 00:04:13.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.841 "dma_device_type": 2 00:04:13.841 } 00:04:13.841 ], 00:04:13.841 "driver_specific": {} 00:04:13.841 }, 00:04:13.841 { 
00:04:13.841 "name": "Passthru0", 00:04:13.841 "aliases": [ 00:04:13.841 "ba5008e7-a091-575e-b8ba-9bf3a3ab5785" 00:04:13.841 ], 00:04:13.841 "product_name": "passthru", 00:04:13.841 "block_size": 512, 00:04:13.841 "num_blocks": 16384, 00:04:13.841 "uuid": "ba5008e7-a091-575e-b8ba-9bf3a3ab5785", 00:04:13.841 "assigned_rate_limits": { 00:04:13.841 "rw_ios_per_sec": 0, 00:04:13.841 "rw_mbytes_per_sec": 0, 00:04:13.841 "r_mbytes_per_sec": 0, 00:04:13.841 "w_mbytes_per_sec": 0 00:04:13.841 }, 00:04:13.841 "claimed": false, 00:04:13.841 "zoned": false, 00:04:13.841 "supported_io_types": { 00:04:13.841 "read": true, 00:04:13.841 "write": true, 00:04:13.841 "unmap": true, 00:04:13.841 "flush": true, 00:04:13.841 "reset": true, 00:04:13.841 "nvme_admin": false, 00:04:13.841 "nvme_io": false, 00:04:13.841 "nvme_io_md": false, 00:04:13.841 "write_zeroes": true, 00:04:13.841 "zcopy": true, 00:04:13.841 "get_zone_info": false, 00:04:13.841 "zone_management": false, 00:04:13.841 "zone_append": false, 00:04:13.841 "compare": false, 00:04:13.841 "compare_and_write": false, 00:04:13.841 "abort": true, 00:04:13.841 "seek_hole": false, 00:04:13.841 "seek_data": false, 00:04:13.841 "copy": true, 00:04:13.841 "nvme_iov_md": false 00:04:13.841 }, 00:04:13.841 "memory_domains": [ 00:04:13.841 { 00:04:13.841 "dma_device_id": "system", 00:04:13.841 "dma_device_type": 1 00:04:13.841 }, 00:04:13.841 { 00:04:13.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.841 "dma_device_type": 2 00:04:13.841 } 00:04:13.841 ], 00:04:13.841 "driver_specific": { 00:04:13.841 "passthru": { 00:04:13.841 "name": "Passthru0", 00:04:13.841 "base_bdev_name": "Malloc0" 00:04:13.841 } 00:04:13.841 } 00:04:13.841 } 00:04:13.841 ]' 00:04:13.841 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:13.841 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:13.841 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:13.841 12:13:56 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.841 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.841 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.841 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:13.841 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.841 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.841 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.841 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:13.841 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.841 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.841 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.841 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:13.841 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:13.841 12:13:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:13.841 00:04:13.841 real 0m0.268s 00:04:13.841 user 0m0.162s 00:04:13.841 sys 0m0.039s 00:04:13.841 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.841 12:13:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.841 ************************************ 00:04:13.841 END TEST rpc_integrity 00:04:13.841 ************************************ 00:04:13.841 12:13:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:13.841 12:13:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.841 12:13:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.841 12:13:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.841 ************************************ 00:04:13.841 START TEST rpc_plugins 
00:04:13.841 ************************************ 00:04:13.841 12:13:56 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:13.841 12:13:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:13.841 12:13:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.841 12:13:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.100 12:13:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.100 12:13:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:14.100 12:13:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:14.100 12:13:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.100 12:13:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.100 12:13:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.100 12:13:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:14.100 { 00:04:14.100 "name": "Malloc1", 00:04:14.100 "aliases": [ 00:04:14.100 "39efd4ec-2949-4bbd-a1fb-6ee58c854adc" 00:04:14.100 ], 00:04:14.100 "product_name": "Malloc disk", 00:04:14.100 "block_size": 4096, 00:04:14.100 "num_blocks": 256, 00:04:14.100 "uuid": "39efd4ec-2949-4bbd-a1fb-6ee58c854adc", 00:04:14.100 "assigned_rate_limits": { 00:04:14.100 "rw_ios_per_sec": 0, 00:04:14.100 "rw_mbytes_per_sec": 0, 00:04:14.100 "r_mbytes_per_sec": 0, 00:04:14.100 "w_mbytes_per_sec": 0 00:04:14.100 }, 00:04:14.100 "claimed": false, 00:04:14.100 "zoned": false, 00:04:14.100 "supported_io_types": { 00:04:14.100 "read": true, 00:04:14.100 "write": true, 00:04:14.100 "unmap": true, 00:04:14.100 "flush": true, 00:04:14.100 "reset": true, 00:04:14.100 "nvme_admin": false, 00:04:14.100 "nvme_io": false, 00:04:14.100 "nvme_io_md": false, 00:04:14.100 "write_zeroes": true, 00:04:14.100 "zcopy": true, 00:04:14.100 "get_zone_info": false, 00:04:14.100 "zone_management": false, 00:04:14.100 
"zone_append": false, 00:04:14.100 "compare": false, 00:04:14.100 "compare_and_write": false, 00:04:14.100 "abort": true, 00:04:14.100 "seek_hole": false, 00:04:14.100 "seek_data": false, 00:04:14.100 "copy": true, 00:04:14.100 "nvme_iov_md": false 00:04:14.100 }, 00:04:14.100 "memory_domains": [ 00:04:14.100 { 00:04:14.100 "dma_device_id": "system", 00:04:14.100 "dma_device_type": 1 00:04:14.100 }, 00:04:14.100 { 00:04:14.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.101 "dma_device_type": 2 00:04:14.101 } 00:04:14.101 ], 00:04:14.101 "driver_specific": {} 00:04:14.101 } 00:04:14.101 ]' 00:04:14.101 12:13:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:14.101 12:13:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:14.101 12:13:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:14.101 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.101 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.101 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.101 12:13:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:14.101 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.101 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.101 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.101 12:13:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:14.101 12:13:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:14.101 12:13:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:14.101 00:04:14.101 real 0m0.142s 00:04:14.101 user 0m0.086s 00:04:14.101 sys 0m0.019s 00:04:14.101 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.101 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.101 ************************************ 
00:04:14.101 END TEST rpc_plugins 00:04:14.101 ************************************ 00:04:14.101 12:13:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:14.101 12:13:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.101 12:13:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.101 12:13:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.101 ************************************ 00:04:14.101 START TEST rpc_trace_cmd_test 00:04:14.101 ************************************ 00:04:14.101 12:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:14.101 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:14.101 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:14.101 12:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.101 12:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:14.101 12:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.101 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:14.101 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid244644", 00:04:14.101 "tpoint_group_mask": "0x8", 00:04:14.101 "iscsi_conn": { 00:04:14.101 "mask": "0x2", 00:04:14.101 "tpoint_mask": "0x0" 00:04:14.101 }, 00:04:14.101 "scsi": { 00:04:14.101 "mask": "0x4", 00:04:14.101 "tpoint_mask": "0x0" 00:04:14.101 }, 00:04:14.101 "bdev": { 00:04:14.101 "mask": "0x8", 00:04:14.101 "tpoint_mask": "0xffffffffffffffff" 00:04:14.101 }, 00:04:14.101 "nvmf_rdma": { 00:04:14.101 "mask": "0x10", 00:04:14.101 "tpoint_mask": "0x0" 00:04:14.101 }, 00:04:14.101 "nvmf_tcp": { 00:04:14.101 "mask": "0x20", 00:04:14.101 "tpoint_mask": "0x0" 00:04:14.101 }, 00:04:14.101 "ftl": { 00:04:14.101 "mask": "0x40", 00:04:14.101 "tpoint_mask": "0x0" 00:04:14.101 }, 00:04:14.101 "blobfs": { 00:04:14.101 "mask": "0x80", 00:04:14.101 
"tpoint_mask": "0x0" 00:04:14.101 }, 00:04:14.101 "dsa": { 00:04:14.101 "mask": "0x200", 00:04:14.101 "tpoint_mask": "0x0" 00:04:14.101 }, 00:04:14.101 "thread": { 00:04:14.101 "mask": "0x400", 00:04:14.101 "tpoint_mask": "0x0" 00:04:14.101 }, 00:04:14.101 "nvme_pcie": { 00:04:14.101 "mask": "0x800", 00:04:14.101 "tpoint_mask": "0x0" 00:04:14.101 }, 00:04:14.101 "iaa": { 00:04:14.101 "mask": "0x1000", 00:04:14.101 "tpoint_mask": "0x0" 00:04:14.101 }, 00:04:14.101 "nvme_tcp": { 00:04:14.101 "mask": "0x2000", 00:04:14.101 "tpoint_mask": "0x0" 00:04:14.101 }, 00:04:14.101 "bdev_nvme": { 00:04:14.101 "mask": "0x4000", 00:04:14.101 "tpoint_mask": "0x0" 00:04:14.101 }, 00:04:14.101 "sock": { 00:04:14.101 "mask": "0x8000", 00:04:14.101 "tpoint_mask": "0x0" 00:04:14.101 }, 00:04:14.101 "blob": { 00:04:14.101 "mask": "0x10000", 00:04:14.101 "tpoint_mask": "0x0" 00:04:14.101 }, 00:04:14.101 "bdev_raid": { 00:04:14.101 "mask": "0x20000", 00:04:14.101 "tpoint_mask": "0x0" 00:04:14.101 }, 00:04:14.101 "scheduler": { 00:04:14.101 "mask": "0x40000", 00:04:14.101 "tpoint_mask": "0x0" 00:04:14.101 } 00:04:14.101 }' 00:04:14.101 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:14.360 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:14.360 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:14.360 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:14.360 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:14.360 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:14.360 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:14.360 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:14.360 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:14.360 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:14.360 00:04:14.360 real 0m0.222s 00:04:14.360 user 0m0.189s 00:04:14.360 sys 0m0.026s 00:04:14.360 12:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.360 12:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:14.360 ************************************ 00:04:14.360 END TEST rpc_trace_cmd_test 00:04:14.360 ************************************ 00:04:14.360 12:13:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:14.360 12:13:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:14.360 12:13:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:14.360 12:13:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.360 12:13:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.360 12:13:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.360 ************************************ 00:04:14.360 START TEST rpc_daemon_integrity 00:04:14.360 ************************************ 00:04:14.360 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:14.360 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:14.360 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.360 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.360 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.360 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:14.360 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:14.620 { 00:04:14.620 "name": "Malloc2", 00:04:14.620 "aliases": [ 00:04:14.620 "d679deec-1b6b-4645-95eb-649b24020f0a" 00:04:14.620 ], 00:04:14.620 "product_name": "Malloc disk", 00:04:14.620 "block_size": 512, 00:04:14.620 "num_blocks": 16384, 00:04:14.620 "uuid": "d679deec-1b6b-4645-95eb-649b24020f0a", 00:04:14.620 "assigned_rate_limits": { 00:04:14.620 "rw_ios_per_sec": 0, 00:04:14.620 "rw_mbytes_per_sec": 0, 00:04:14.620 "r_mbytes_per_sec": 0, 00:04:14.620 "w_mbytes_per_sec": 0 00:04:14.620 }, 00:04:14.620 "claimed": false, 00:04:14.620 "zoned": false, 00:04:14.620 "supported_io_types": { 00:04:14.620 "read": true, 00:04:14.620 "write": true, 00:04:14.620 "unmap": true, 00:04:14.620 "flush": true, 00:04:14.620 "reset": true, 00:04:14.620 "nvme_admin": false, 00:04:14.620 "nvme_io": false, 00:04:14.620 "nvme_io_md": false, 00:04:14.620 "write_zeroes": true, 00:04:14.620 "zcopy": true, 00:04:14.620 "get_zone_info": false, 00:04:14.620 "zone_management": false, 00:04:14.620 "zone_append": false, 00:04:14.620 "compare": false, 00:04:14.620 "compare_and_write": false, 00:04:14.620 "abort": true, 00:04:14.620 "seek_hole": false, 00:04:14.620 "seek_data": false, 00:04:14.620 "copy": true, 00:04:14.620 "nvme_iov_md": false 00:04:14.620 }, 00:04:14.620 "memory_domains": [ 00:04:14.620 { 
00:04:14.620 "dma_device_id": "system", 00:04:14.620 "dma_device_type": 1 00:04:14.620 }, 00:04:14.620 { 00:04:14.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.620 "dma_device_type": 2 00:04:14.620 } 00:04:14.620 ], 00:04:14.620 "driver_specific": {} 00:04:14.620 } 00:04:14.620 ]' 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.620 [2024-11-20 12:13:57.582368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:14.620 [2024-11-20 12:13:57.582394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:14.620 [2024-11-20 12:13:57.582405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16c6b70 00:04:14.620 [2024-11-20 12:13:57.582411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:14.620 [2024-11-20 12:13:57.583389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:14.620 [2024-11-20 12:13:57.583409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:14.620 Passthru0 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:14.620 { 00:04:14.620 "name": "Malloc2", 00:04:14.620 "aliases": [ 00:04:14.620 "d679deec-1b6b-4645-95eb-649b24020f0a" 00:04:14.620 ], 00:04:14.620 "product_name": "Malloc disk", 00:04:14.620 "block_size": 512, 00:04:14.620 "num_blocks": 16384, 00:04:14.620 "uuid": "d679deec-1b6b-4645-95eb-649b24020f0a", 00:04:14.620 "assigned_rate_limits": { 00:04:14.620 "rw_ios_per_sec": 0, 00:04:14.620 "rw_mbytes_per_sec": 0, 00:04:14.620 "r_mbytes_per_sec": 0, 00:04:14.620 "w_mbytes_per_sec": 0 00:04:14.620 }, 00:04:14.620 "claimed": true, 00:04:14.620 "claim_type": "exclusive_write", 00:04:14.620 "zoned": false, 00:04:14.620 "supported_io_types": { 00:04:14.620 "read": true, 00:04:14.620 "write": true, 00:04:14.620 "unmap": true, 00:04:14.620 "flush": true, 00:04:14.620 "reset": true, 00:04:14.620 "nvme_admin": false, 00:04:14.620 "nvme_io": false, 00:04:14.620 "nvme_io_md": false, 00:04:14.620 "write_zeroes": true, 00:04:14.620 "zcopy": true, 00:04:14.620 "get_zone_info": false, 00:04:14.620 "zone_management": false, 00:04:14.620 "zone_append": false, 00:04:14.620 "compare": false, 00:04:14.620 "compare_and_write": false, 00:04:14.620 "abort": true, 00:04:14.620 "seek_hole": false, 00:04:14.620 "seek_data": false, 00:04:14.620 "copy": true, 00:04:14.620 "nvme_iov_md": false 00:04:14.620 }, 00:04:14.620 "memory_domains": [ 00:04:14.620 { 00:04:14.620 "dma_device_id": "system", 00:04:14.620 "dma_device_type": 1 00:04:14.620 }, 00:04:14.620 { 00:04:14.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.620 "dma_device_type": 2 00:04:14.620 } 00:04:14.620 ], 00:04:14.620 "driver_specific": {} 00:04:14.620 }, 00:04:14.620 { 00:04:14.620 "name": "Passthru0", 00:04:14.620 "aliases": [ 00:04:14.620 "42dfc7dc-8bc7-56eb-b430-e28cfd2c0915" 00:04:14.620 ], 00:04:14.620 "product_name": "passthru", 00:04:14.620 "block_size": 512, 00:04:14.620 "num_blocks": 16384, 00:04:14.620 "uuid": 
"42dfc7dc-8bc7-56eb-b430-e28cfd2c0915", 00:04:14.620 "assigned_rate_limits": { 00:04:14.620 "rw_ios_per_sec": 0, 00:04:14.620 "rw_mbytes_per_sec": 0, 00:04:14.620 "r_mbytes_per_sec": 0, 00:04:14.620 "w_mbytes_per_sec": 0 00:04:14.620 }, 00:04:14.620 "claimed": false, 00:04:14.620 "zoned": false, 00:04:14.620 "supported_io_types": { 00:04:14.620 "read": true, 00:04:14.620 "write": true, 00:04:14.620 "unmap": true, 00:04:14.620 "flush": true, 00:04:14.620 "reset": true, 00:04:14.620 "nvme_admin": false, 00:04:14.620 "nvme_io": false, 00:04:14.620 "nvme_io_md": false, 00:04:14.620 "write_zeroes": true, 00:04:14.620 "zcopy": true, 00:04:14.620 "get_zone_info": false, 00:04:14.620 "zone_management": false, 00:04:14.620 "zone_append": false, 00:04:14.620 "compare": false, 00:04:14.620 "compare_and_write": false, 00:04:14.620 "abort": true, 00:04:14.620 "seek_hole": false, 00:04:14.620 "seek_data": false, 00:04:14.620 "copy": true, 00:04:14.620 "nvme_iov_md": false 00:04:14.620 }, 00:04:14.620 "memory_domains": [ 00:04:14.620 { 00:04:14.620 "dma_device_id": "system", 00:04:14.620 "dma_device_type": 1 00:04:14.620 }, 00:04:14.620 { 00:04:14.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.620 "dma_device_type": 2 00:04:14.620 } 00:04:14.620 ], 00:04:14.620 "driver_specific": { 00:04:14.620 "passthru": { 00:04:14.620 "name": "Passthru0", 00:04:14.620 "base_bdev_name": "Malloc2" 00:04:14.620 } 00:04:14.620 } 00:04:14.620 } 00:04:14.620 ]' 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:14.620 00:04:14.620 real 0m0.268s 00:04:14.620 user 0m0.162s 00:04:14.620 sys 0m0.038s 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.620 12:13:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.620 ************************************ 00:04:14.620 END TEST rpc_daemon_integrity 00:04:14.620 ************************************ 00:04:14.879 12:13:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:14.879 12:13:57 rpc -- rpc/rpc.sh@84 -- # killprocess 244644 00:04:14.879 12:13:57 rpc -- common/autotest_common.sh@954 -- # '[' -z 244644 ']' 00:04:14.879 12:13:57 rpc -- common/autotest_common.sh@958 -- # kill -0 244644 00:04:14.879 12:13:57 rpc -- common/autotest_common.sh@959 -- # uname 00:04:14.879 12:13:57 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.879 12:13:57 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 244644 00:04:14.879 12:13:57 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.879 12:13:57 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.879 12:13:57 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 244644' 00:04:14.879 killing process with pid 244644 00:04:14.879 12:13:57 rpc -- common/autotest_common.sh@973 -- # kill 244644 00:04:14.879 12:13:57 rpc -- common/autotest_common.sh@978 -- # wait 244644 00:04:15.138 00:04:15.138 real 0m2.073s 00:04:15.138 user 0m2.622s 00:04:15.138 sys 0m0.708s 00:04:15.138 12:13:58 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.138 12:13:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.138 ************************************ 00:04:15.138 END TEST rpc 00:04:15.138 ************************************ 00:04:15.138 12:13:58 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:15.138 12:13:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.138 12:13:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.138 12:13:58 -- common/autotest_common.sh@10 -- # set +x 00:04:15.138 ************************************ 00:04:15.138 START TEST skip_rpc 00:04:15.139 ************************************ 00:04:15.139 12:13:58 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:15.398 * Looking for test storage... 
00:04:15.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:15.398 12:13:58 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:15.398 12:13:58 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:15.398 12:13:58 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:15.398 12:13:58 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.398 12:13:58 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:15.398 12:13:58 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.398 12:13:58 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:15.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.398 --rc genhtml_branch_coverage=1 00:04:15.398 --rc genhtml_function_coverage=1 00:04:15.398 --rc genhtml_legend=1 00:04:15.398 --rc geninfo_all_blocks=1 00:04:15.398 --rc geninfo_unexecuted_blocks=1 00:04:15.398 00:04:15.398 ' 00:04:15.398 12:13:58 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:15.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.398 --rc genhtml_branch_coverage=1 00:04:15.398 --rc genhtml_function_coverage=1 00:04:15.398 --rc genhtml_legend=1 00:04:15.398 --rc geninfo_all_blocks=1 00:04:15.398 --rc geninfo_unexecuted_blocks=1 00:04:15.398 00:04:15.398 ' 00:04:15.398 12:13:58 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:15.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.398 --rc genhtml_branch_coverage=1 00:04:15.398 --rc genhtml_function_coverage=1 00:04:15.398 --rc genhtml_legend=1 00:04:15.398 --rc geninfo_all_blocks=1 00:04:15.398 --rc geninfo_unexecuted_blocks=1 00:04:15.398 00:04:15.398 ' 00:04:15.398 12:13:58 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:15.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.398 --rc genhtml_branch_coverage=1 00:04:15.398 --rc genhtml_function_coverage=1 00:04:15.398 --rc genhtml_legend=1 00:04:15.398 --rc geninfo_all_blocks=1 00:04:15.398 --rc geninfo_unexecuted_blocks=1 00:04:15.398 00:04:15.398 ' 00:04:15.398 12:13:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:15.398 12:13:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:15.398 12:13:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:15.398 12:13:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.398 12:13:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.398 12:13:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.398 ************************************ 00:04:15.398 START TEST skip_rpc 00:04:15.398 ************************************ 00:04:15.398 12:13:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:15.398 12:13:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=245279 00:04:15.398 12:13:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.398 12:13:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:15.398 12:13:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:15.398 [2024-11-20 12:13:58.442951] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:04:15.398 [2024-11-20 12:13:58.442989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid245279 ] 00:04:15.657 [2024-11-20 12:13:58.517100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.657 [2024-11-20 12:13:58.557402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:20.929 12:14:03 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 245279 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 245279 ']' 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 245279 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 245279 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 245279' 00:04:20.929 killing process with pid 245279 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 245279 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 245279 00:04:20.929 00:04:20.929 real 0m5.366s 00:04:20.929 user 0m5.126s 00:04:20.929 sys 0m0.277s 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.929 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.929 ************************************ 00:04:20.929 END TEST skip_rpc 00:04:20.929 ************************************ 00:04:20.929 12:14:03 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:20.929 12:14:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.929 12:14:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.929 12:14:03 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:04:20.929 ************************************ 00:04:20.929 START TEST skip_rpc_with_json 00:04:20.929 ************************************ 00:04:20.929 12:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:20.929 12:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:20.929 12:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=246225 00:04:20.929 12:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.929 12:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.929 12:14:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 246225 00:04:20.929 12:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 246225 ']' 00:04:20.929 12:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.929 12:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.929 12:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.929 12:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.929 12:14:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.929 [2024-11-20 12:14:03.882414] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:04:20.930 [2024-11-20 12:14:03.882460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid246225 ] 00:04:20.930 [2024-11-20 12:14:03.957258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.930 [2024-11-20 12:14:03.994788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.189 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.189 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:21.189 12:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:21.189 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.189 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.189 [2024-11-20 12:14:04.219003] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:21.189 request: 00:04:21.189 { 00:04:21.189 "trtype": "tcp", 00:04:21.189 "method": "nvmf_get_transports", 00:04:21.189 "req_id": 1 00:04:21.189 } 00:04:21.189 Got JSON-RPC error response 00:04:21.189 response: 00:04:21.189 { 00:04:21.189 "code": -19, 00:04:21.189 "message": "No such device" 00:04:21.189 } 00:04:21.189 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:21.189 12:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:21.189 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.189 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.189 [2024-11-20 12:14:04.227104] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:21.189 12:14:04 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.189 12:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:21.189 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.189 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.449 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.449 12:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:21.449 { 00:04:21.449 "subsystems": [ 00:04:21.449 { 00:04:21.449 "subsystem": "fsdev", 00:04:21.449 "config": [ 00:04:21.449 { 00:04:21.449 "method": "fsdev_set_opts", 00:04:21.449 "params": { 00:04:21.449 "fsdev_io_pool_size": 65535, 00:04:21.449 "fsdev_io_cache_size": 256 00:04:21.449 } 00:04:21.449 } 00:04:21.449 ] 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "subsystem": "vfio_user_target", 00:04:21.449 "config": null 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "subsystem": "keyring", 00:04:21.449 "config": [] 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "subsystem": "iobuf", 00:04:21.449 "config": [ 00:04:21.449 { 00:04:21.449 "method": "iobuf_set_options", 00:04:21.449 "params": { 00:04:21.449 "small_pool_count": 8192, 00:04:21.449 "large_pool_count": 1024, 00:04:21.449 "small_bufsize": 8192, 00:04:21.449 "large_bufsize": 135168, 00:04:21.449 "enable_numa": false 00:04:21.449 } 00:04:21.449 } 00:04:21.449 ] 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "subsystem": "sock", 00:04:21.449 "config": [ 00:04:21.449 { 00:04:21.449 "method": "sock_set_default_impl", 00:04:21.449 "params": { 00:04:21.449 "impl_name": "posix" 00:04:21.449 } 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "method": "sock_impl_set_options", 00:04:21.449 "params": { 00:04:21.449 "impl_name": "ssl", 00:04:21.449 "recv_buf_size": 4096, 00:04:21.449 "send_buf_size": 4096, 
00:04:21.449 "enable_recv_pipe": true, 00:04:21.449 "enable_quickack": false, 00:04:21.449 "enable_placement_id": 0, 00:04:21.449 "enable_zerocopy_send_server": true, 00:04:21.449 "enable_zerocopy_send_client": false, 00:04:21.449 "zerocopy_threshold": 0, 00:04:21.449 "tls_version": 0, 00:04:21.449 "enable_ktls": false 00:04:21.449 } 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "method": "sock_impl_set_options", 00:04:21.449 "params": { 00:04:21.449 "impl_name": "posix", 00:04:21.449 "recv_buf_size": 2097152, 00:04:21.449 "send_buf_size": 2097152, 00:04:21.449 "enable_recv_pipe": true, 00:04:21.449 "enable_quickack": false, 00:04:21.449 "enable_placement_id": 0, 00:04:21.449 "enable_zerocopy_send_server": true, 00:04:21.449 "enable_zerocopy_send_client": false, 00:04:21.449 "zerocopy_threshold": 0, 00:04:21.449 "tls_version": 0, 00:04:21.449 "enable_ktls": false 00:04:21.449 } 00:04:21.449 } 00:04:21.449 ] 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "subsystem": "vmd", 00:04:21.449 "config": [] 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "subsystem": "accel", 00:04:21.449 "config": [ 00:04:21.449 { 00:04:21.449 "method": "accel_set_options", 00:04:21.449 "params": { 00:04:21.449 "small_cache_size": 128, 00:04:21.449 "large_cache_size": 16, 00:04:21.449 "task_count": 2048, 00:04:21.449 "sequence_count": 2048, 00:04:21.449 "buf_count": 2048 00:04:21.449 } 00:04:21.449 } 00:04:21.449 ] 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "subsystem": "bdev", 00:04:21.449 "config": [ 00:04:21.449 { 00:04:21.449 "method": "bdev_set_options", 00:04:21.449 "params": { 00:04:21.449 "bdev_io_pool_size": 65535, 00:04:21.449 "bdev_io_cache_size": 256, 00:04:21.449 "bdev_auto_examine": true, 00:04:21.449 "iobuf_small_cache_size": 128, 00:04:21.449 "iobuf_large_cache_size": 16 00:04:21.449 } 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "method": "bdev_raid_set_options", 00:04:21.449 "params": { 00:04:21.449 "process_window_size_kb": 1024, 00:04:21.449 "process_max_bandwidth_mb_sec": 0 
00:04:21.449 } 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "method": "bdev_iscsi_set_options", 00:04:21.449 "params": { 00:04:21.449 "timeout_sec": 30 00:04:21.449 } 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "method": "bdev_nvme_set_options", 00:04:21.449 "params": { 00:04:21.449 "action_on_timeout": "none", 00:04:21.449 "timeout_us": 0, 00:04:21.449 "timeout_admin_us": 0, 00:04:21.449 "keep_alive_timeout_ms": 10000, 00:04:21.449 "arbitration_burst": 0, 00:04:21.449 "low_priority_weight": 0, 00:04:21.449 "medium_priority_weight": 0, 00:04:21.449 "high_priority_weight": 0, 00:04:21.449 "nvme_adminq_poll_period_us": 10000, 00:04:21.449 "nvme_ioq_poll_period_us": 0, 00:04:21.449 "io_queue_requests": 0, 00:04:21.449 "delay_cmd_submit": true, 00:04:21.449 "transport_retry_count": 4, 00:04:21.449 "bdev_retry_count": 3, 00:04:21.449 "transport_ack_timeout": 0, 00:04:21.449 "ctrlr_loss_timeout_sec": 0, 00:04:21.449 "reconnect_delay_sec": 0, 00:04:21.449 "fast_io_fail_timeout_sec": 0, 00:04:21.449 "disable_auto_failback": false, 00:04:21.449 "generate_uuids": false, 00:04:21.449 "transport_tos": 0, 00:04:21.449 "nvme_error_stat": false, 00:04:21.449 "rdma_srq_size": 0, 00:04:21.449 "io_path_stat": false, 00:04:21.449 "allow_accel_sequence": false, 00:04:21.449 "rdma_max_cq_size": 0, 00:04:21.449 "rdma_cm_event_timeout_ms": 0, 00:04:21.449 "dhchap_digests": [ 00:04:21.449 "sha256", 00:04:21.449 "sha384", 00:04:21.449 "sha512" 00:04:21.449 ], 00:04:21.449 "dhchap_dhgroups": [ 00:04:21.449 "null", 00:04:21.449 "ffdhe2048", 00:04:21.449 "ffdhe3072", 00:04:21.449 "ffdhe4096", 00:04:21.449 "ffdhe6144", 00:04:21.449 "ffdhe8192" 00:04:21.449 ] 00:04:21.449 } 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "method": "bdev_nvme_set_hotplug", 00:04:21.449 "params": { 00:04:21.449 "period_us": 100000, 00:04:21.449 "enable": false 00:04:21.449 } 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "method": "bdev_wait_for_examine" 00:04:21.449 } 00:04:21.449 ] 00:04:21.449 }, 00:04:21.449 { 
00:04:21.449 "subsystem": "scsi", 00:04:21.449 "config": null 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "subsystem": "scheduler", 00:04:21.449 "config": [ 00:04:21.449 { 00:04:21.449 "method": "framework_set_scheduler", 00:04:21.449 "params": { 00:04:21.449 "name": "static" 00:04:21.449 } 00:04:21.449 } 00:04:21.449 ] 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "subsystem": "vhost_scsi", 00:04:21.449 "config": [] 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "subsystem": "vhost_blk", 00:04:21.449 "config": [] 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "subsystem": "ublk", 00:04:21.449 "config": [] 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "subsystem": "nbd", 00:04:21.449 "config": [] 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "subsystem": "nvmf", 00:04:21.449 "config": [ 00:04:21.449 { 00:04:21.449 "method": "nvmf_set_config", 00:04:21.449 "params": { 00:04:21.449 "discovery_filter": "match_any", 00:04:21.449 "admin_cmd_passthru": { 00:04:21.449 "identify_ctrlr": false 00:04:21.449 }, 00:04:21.449 "dhchap_digests": [ 00:04:21.449 "sha256", 00:04:21.449 "sha384", 00:04:21.449 "sha512" 00:04:21.449 ], 00:04:21.449 "dhchap_dhgroups": [ 00:04:21.449 "null", 00:04:21.449 "ffdhe2048", 00:04:21.449 "ffdhe3072", 00:04:21.449 "ffdhe4096", 00:04:21.449 "ffdhe6144", 00:04:21.449 "ffdhe8192" 00:04:21.449 ] 00:04:21.449 } 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "method": "nvmf_set_max_subsystems", 00:04:21.449 "params": { 00:04:21.449 "max_subsystems": 1024 00:04:21.449 } 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "method": "nvmf_set_crdt", 00:04:21.449 "params": { 00:04:21.449 "crdt1": 0, 00:04:21.449 "crdt2": 0, 00:04:21.449 "crdt3": 0 00:04:21.449 } 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "method": "nvmf_create_transport", 00:04:21.449 "params": { 00:04:21.449 "trtype": "TCP", 00:04:21.449 "max_queue_depth": 128, 00:04:21.449 "max_io_qpairs_per_ctrlr": 127, 00:04:21.449 "in_capsule_data_size": 4096, 00:04:21.449 "max_io_size": 131072, 00:04:21.449 
"io_unit_size": 131072, 00:04:21.449 "max_aq_depth": 128, 00:04:21.449 "num_shared_buffers": 511, 00:04:21.449 "buf_cache_size": 4294967295, 00:04:21.449 "dif_insert_or_strip": false, 00:04:21.449 "zcopy": false, 00:04:21.449 "c2h_success": true, 00:04:21.449 "sock_priority": 0, 00:04:21.449 "abort_timeout_sec": 1, 00:04:21.449 "ack_timeout": 0, 00:04:21.449 "data_wr_pool_size": 0 00:04:21.449 } 00:04:21.449 } 00:04:21.449 ] 00:04:21.449 }, 00:04:21.449 { 00:04:21.449 "subsystem": "iscsi", 00:04:21.449 "config": [ 00:04:21.449 { 00:04:21.449 "method": "iscsi_set_options", 00:04:21.449 "params": { 00:04:21.449 "node_base": "iqn.2016-06.io.spdk", 00:04:21.449 "max_sessions": 128, 00:04:21.449 "max_connections_per_session": 2, 00:04:21.450 "max_queue_depth": 64, 00:04:21.450 "default_time2wait": 2, 00:04:21.450 "default_time2retain": 20, 00:04:21.450 "first_burst_length": 8192, 00:04:21.450 "immediate_data": true, 00:04:21.450 "allow_duplicated_isid": false, 00:04:21.450 "error_recovery_level": 0, 00:04:21.450 "nop_timeout": 60, 00:04:21.450 "nop_in_interval": 30, 00:04:21.450 "disable_chap": false, 00:04:21.450 "require_chap": false, 00:04:21.450 "mutual_chap": false, 00:04:21.450 "chap_group": 0, 00:04:21.450 "max_large_datain_per_connection": 64, 00:04:21.450 "max_r2t_per_connection": 4, 00:04:21.450 "pdu_pool_size": 36864, 00:04:21.450 "immediate_data_pool_size": 16384, 00:04:21.450 "data_out_pool_size": 2048 00:04:21.450 } 00:04:21.450 } 00:04:21.450 ] 00:04:21.450 } 00:04:21.450 ] 00:04:21.450 } 00:04:21.450 12:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:21.450 12:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 246225 00:04:21.450 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 246225 ']' 00:04:21.450 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 246225 00:04:21.450 12:14:04 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:04:21.450 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.450 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 246225 00:04:21.450 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.450 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:21.450 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 246225' 00:04:21.450 killing process with pid 246225 00:04:21.450 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 246225 00:04:21.450 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 246225 00:04:21.709 12:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=246251 00:04:21.709 12:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:21.709 12:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:26.979 12:14:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 246251 00:04:26.979 12:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 246251 ']' 00:04:26.979 12:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 246251 00:04:26.979 12:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:26.979 12:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.979 12:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 246251 00:04:26.979 12:14:09 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.979 12:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:26.979 12:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 246251' 00:04:26.979 killing process with pid 246251 00:04:26.979 12:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 246251 00:04:26.979 12:14:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 246251 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:27.238 00:04:27.238 real 0m6.290s 00:04:27.238 user 0m5.982s 00:04:27.238 sys 0m0.608s 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.238 ************************************ 00:04:27.238 END TEST skip_rpc_with_json 00:04:27.238 ************************************ 00:04:27.238 12:14:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:27.238 12:14:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.238 12:14:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.238 12:14:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.238 ************************************ 00:04:27.238 START TEST skip_rpc_with_delay 00:04:27.238 ************************************ 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:27.238 [2024-11-20 12:14:10.250965] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:27.238 00:04:27.238 real 0m0.075s 00:04:27.238 user 0m0.047s 00:04:27.238 sys 0m0.027s 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.238 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:27.238 ************************************ 00:04:27.238 END TEST skip_rpc_with_delay 00:04:27.238 ************************************ 00:04:27.238 12:14:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:27.238 12:14:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:27.238 12:14:10 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:27.238 12:14:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.238 12:14:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.238 12:14:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.238 ************************************ 00:04:27.238 START TEST exit_on_failed_rpc_init 00:04:27.238 ************************************ 00:04:27.238 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:27.238 12:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=247272 00:04:27.238 12:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 247272 00:04:27.238 12:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:27.238 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 247272 ']' 00:04:27.238 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.238 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.238 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.238 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.238 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:27.497 [2024-11-20 12:14:10.393514] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:04:27.497 [2024-11-20 12:14:10.393563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid247272 ] 00:04:27.497 [2024-11-20 12:14:10.469972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.497 [2024-11-20 12:14:10.513084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.755 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.755 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:27.755 12:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.755 12:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.755 
12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:27.755 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.755 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.755 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.755 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.755 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.755 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.755 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.755 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.755 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:27.755 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.755 [2024-11-20 12:14:10.788225] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:04:27.755 [2024-11-20 12:14:10.788270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid247445 ] 00:04:27.755 [2024-11-20 12:14:10.863989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.013 [2024-11-20 12:14:10.905856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.013 [2024-11-20 12:14:10.905927] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:28.013 [2024-11-20 12:14:10.905936] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:28.013 [2024-11-20 12:14:10.905946] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:28.013 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:28.013 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:28.013 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:28.013 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:28.013 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:28.013 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:28.013 12:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:28.013 12:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 247272 00:04:28.013 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 247272 ']' 00:04:28.013 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 247272 00:04:28.013 12:14:10 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:28.013 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.013 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 247272 00:04:28.013 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.013 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.013 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 247272' 00:04:28.013 killing process with pid 247272 00:04:28.013 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 247272 00:04:28.013 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 247272 00:04:28.272 00:04:28.272 real 0m0.963s 00:04:28.272 user 0m1.021s 00:04:28.272 sys 0m0.393s 00:04:28.272 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.272 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:28.272 ************************************ 00:04:28.272 END TEST exit_on_failed_rpc_init 00:04:28.272 ************************************ 00:04:28.272 12:14:11 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:28.272 00:04:28.272 real 0m13.165s 00:04:28.272 user 0m12.390s 00:04:28.272 sys 0m1.594s 00:04:28.272 12:14:11 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.272 12:14:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.272 ************************************ 00:04:28.272 END TEST skip_rpc 00:04:28.272 ************************************ 00:04:28.272 12:14:11 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:28.272 12:14:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.272 12:14:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.272 12:14:11 -- common/autotest_common.sh@10 -- # set +x 00:04:28.530 ************************************ 00:04:28.530 START TEST rpc_client 00:04:28.530 ************************************ 00:04:28.530 12:14:11 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:28.530 * Looking for test storage... 00:04:28.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:28.530 12:14:11 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:28.530 12:14:11 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:28.530 12:14:11 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:28.530 12:14:11 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.530 12:14:11 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:28.530 12:14:11 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.530 12:14:11 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:28.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.530 --rc genhtml_branch_coverage=1 00:04:28.530 --rc genhtml_function_coverage=1 00:04:28.530 --rc genhtml_legend=1 00:04:28.530 --rc geninfo_all_blocks=1 00:04:28.530 --rc geninfo_unexecuted_blocks=1 00:04:28.530 00:04:28.530 ' 00:04:28.530 12:14:11 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:28.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.530 --rc genhtml_branch_coverage=1 
00:04:28.530 --rc genhtml_function_coverage=1 00:04:28.530 --rc genhtml_legend=1 00:04:28.530 --rc geninfo_all_blocks=1 00:04:28.530 --rc geninfo_unexecuted_blocks=1 00:04:28.530 00:04:28.530 ' 00:04:28.530 12:14:11 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:28.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.530 --rc genhtml_branch_coverage=1 00:04:28.530 --rc genhtml_function_coverage=1 00:04:28.530 --rc genhtml_legend=1 00:04:28.530 --rc geninfo_all_blocks=1 00:04:28.530 --rc geninfo_unexecuted_blocks=1 00:04:28.530 00:04:28.530 ' 00:04:28.530 12:14:11 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:28.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.530 --rc genhtml_branch_coverage=1 00:04:28.530 --rc genhtml_function_coverage=1 00:04:28.530 --rc genhtml_legend=1 00:04:28.530 --rc geninfo_all_blocks=1 00:04:28.530 --rc geninfo_unexecuted_blocks=1 00:04:28.530 00:04:28.530 ' 00:04:28.530 12:14:11 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:28.530 OK 00:04:28.530 12:14:11 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:28.530 00:04:28.530 real 0m0.198s 00:04:28.530 user 0m0.119s 00:04:28.530 sys 0m0.092s 00:04:28.530 12:14:11 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.530 12:14:11 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:28.530 ************************************ 00:04:28.530 END TEST rpc_client 00:04:28.530 ************************************ 00:04:28.789 12:14:11 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:28.789 12:14:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.789 12:14:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.789 12:14:11 -- common/autotest_common.sh@10 
-- # set +x 00:04:28.789 ************************************ 00:04:28.789 START TEST json_config 00:04:28.789 ************************************ 00:04:28.789 12:14:11 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:28.789 12:14:11 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:28.789 12:14:11 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:28.789 12:14:11 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:28.789 12:14:11 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:28.789 12:14:11 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.789 12:14:11 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.789 12:14:11 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.789 12:14:11 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.789 12:14:11 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.789 12:14:11 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.789 12:14:11 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.789 12:14:11 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.789 12:14:11 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.789 12:14:11 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.789 12:14:11 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.789 12:14:11 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:28.789 12:14:11 json_config -- scripts/common.sh@345 -- # : 1 00:04:28.789 12:14:11 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.789 12:14:11 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.789 12:14:11 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:28.789 12:14:11 json_config -- scripts/common.sh@353 -- # local d=1 00:04:28.789 12:14:11 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.789 12:14:11 json_config -- scripts/common.sh@355 -- # echo 1 00:04:28.789 12:14:11 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.789 12:14:11 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:28.789 12:14:11 json_config -- scripts/common.sh@353 -- # local d=2 00:04:28.789 12:14:11 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.789 12:14:11 json_config -- scripts/common.sh@355 -- # echo 2 00:04:28.789 12:14:11 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.789 12:14:11 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.789 12:14:11 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.789 12:14:11 json_config -- scripts/common.sh@368 -- # return 0 00:04:28.789 12:14:11 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.789 12:14:11 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:28.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.789 --rc genhtml_branch_coverage=1 00:04:28.789 --rc genhtml_function_coverage=1 00:04:28.789 --rc genhtml_legend=1 00:04:28.789 --rc geninfo_all_blocks=1 00:04:28.789 --rc geninfo_unexecuted_blocks=1 00:04:28.789 00:04:28.789 ' 00:04:28.789 12:14:11 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:28.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.789 --rc genhtml_branch_coverage=1 00:04:28.789 --rc genhtml_function_coverage=1 00:04:28.789 --rc genhtml_legend=1 00:04:28.789 --rc geninfo_all_blocks=1 00:04:28.789 --rc geninfo_unexecuted_blocks=1 00:04:28.789 00:04:28.789 ' 00:04:28.789 12:14:11 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:28.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.789 --rc genhtml_branch_coverage=1 00:04:28.789 --rc genhtml_function_coverage=1 00:04:28.789 --rc genhtml_legend=1 00:04:28.789 --rc geninfo_all_blocks=1 00:04:28.789 --rc geninfo_unexecuted_blocks=1 00:04:28.789 00:04:28.789 ' 00:04:28.789 12:14:11 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:28.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.789 --rc genhtml_branch_coverage=1 00:04:28.789 --rc genhtml_function_coverage=1 00:04:28.789 --rc genhtml_legend=1 00:04:28.789 --rc geninfo_all_blocks=1 00:04:28.789 --rc geninfo_unexecuted_blocks=1 00:04:28.789 00:04:28.789 ' 00:04:28.789 12:14:11 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:28.789 12:14:11 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:28.789 12:14:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:28.789 12:14:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:28.789 12:14:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:28.789 12:14:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:28.789 12:14:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:28.789 12:14:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:28.789 12:14:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:28.789 12:14:11 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:28.789 12:14:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:28.789 12:14:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:28.789 12:14:11 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:28.789 12:14:11 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:28.789 12:14:11 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:28.789 12:14:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:28.789 12:14:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:28.789 12:14:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:28.789 12:14:11 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:28.789 12:14:11 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:28.789 12:14:11 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:28.789 12:14:11 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:28.789 12:14:11 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:28.789 12:14:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.789 12:14:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.789 12:14:11 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.789 12:14:11 json_config -- paths/export.sh@5 -- # export PATH 00:04:28.790 12:14:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.790 12:14:11 json_config -- nvmf/common.sh@51 -- # : 0 00:04:28.790 12:14:11 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:28.790 12:14:11 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:28.790 12:14:11 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:28.790 12:14:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:28.790 12:14:11 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:28.790 12:14:11 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:28.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:28.790 12:14:11 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:28.790 12:14:11 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:28.790 12:14:11 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:28.790 12:14:11 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:28.790 12:14:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:28.790 12:14:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:28.790 12:14:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:28.790 12:14:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:28.790 12:14:11 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:28.790 12:14:11 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:28.790 12:14:11 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:28.790 12:14:11 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:28.790 12:14:11 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:28.790 12:14:11 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:28.790 12:14:11 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:28.790 12:14:11 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:28.790 12:14:11 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:28.790 12:14:11 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:28.790 12:14:11 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:28.790 INFO: JSON configuration test init 00:04:28.790 12:14:11 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:28.790 12:14:11 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:28.790 12:14:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.790 12:14:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.790 12:14:11 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:28.790 12:14:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.790 12:14:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.790 12:14:11 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:28.790 12:14:11 json_config -- json_config/common.sh@9 -- # local app=target 00:04:28.790 12:14:11 json_config -- json_config/common.sh@10 -- # shift 00:04:28.790 12:14:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:28.790 12:14:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:28.790 12:14:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:28.790 12:14:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.790 12:14:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.790 12:14:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=247733 00:04:28.790 12:14:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:28.790 Waiting for target to run... 
00:04:28.790 12:14:11 json_config -- json_config/common.sh@25 -- # waitforlisten 247733 /var/tmp/spdk_tgt.sock 00:04:28.790 12:14:11 json_config -- common/autotest_common.sh@835 -- # '[' -z 247733 ']' 00:04:28.790 12:14:11 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:28.790 12:14:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:28.790 12:14:11 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.790 12:14:11 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:28.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:28.790 12:14:11 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.790 12:14:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.048 [2024-11-20 12:14:11.930822] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:04:29.048 [2024-11-20 12:14:11.930874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid247733 ] 00:04:29.306 [2024-11-20 12:14:12.384204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.563 [2024-11-20 12:14:12.437385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.821 12:14:12 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.821 12:14:12 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:29.821 12:14:12 json_config -- json_config/common.sh@26 -- # echo '' 00:04:29.821 00:04:29.821 12:14:12 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:29.821 12:14:12 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:29.821 12:14:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.821 12:14:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.821 12:14:12 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:29.821 12:14:12 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:29.821 12:14:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:29.821 12:14:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.821 12:14:12 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:29.821 12:14:12 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:29.821 12:14:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:33.102 12:14:15 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:33.102 12:14:15 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:33.102 12:14:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:33.102 12:14:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.102 12:14:15 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:33.102 12:14:15 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:33.102 12:14:15 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:33.102 12:14:15 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:33.102 12:14:15 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:33.102 12:14:15 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:33.102 12:14:15 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:33.102 12:14:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:33.102 12:14:16 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:33.102 12:14:16 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:33.102 12:14:16 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:33.102 12:14:16 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:33.102 12:14:16 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:33.102 12:14:16 json_config -- json_config/json_config.sh@54 -- # sort 00:04:33.102 12:14:16 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:33.102 12:14:16 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:33.102 12:14:16 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:33.102 12:14:16 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:33.103 12:14:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:33.103 12:14:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.103 12:14:16 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:33.103 12:14:16 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:33.103 12:14:16 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:33.103 12:14:16 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:33.103 12:14:16 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:33.103 12:14:16 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:33.103 12:14:16 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:33.103 12:14:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:33.103 12:14:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.103 12:14:16 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:33.103 12:14:16 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:33.103 12:14:16 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:33.103 12:14:16 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:33.103 12:14:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:33.361 MallocForNvmf0 00:04:33.361 12:14:16 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:33.361 12:14:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:33.620 MallocForNvmf1 00:04:33.620 12:14:16 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:33.620 12:14:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:33.620 [2024-11-20 12:14:16.734990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:33.878 12:14:16 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:33.878 12:14:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:33.878 12:14:16 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:33.878 12:14:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:34.136 12:14:17 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:34.136 12:14:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:34.393 12:14:17 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:34.393 12:14:17 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:34.653 [2024-11-20 12:14:17.541501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:34.653 12:14:17 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:34.653 12:14:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:34.653 12:14:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.653 12:14:17 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:34.653 12:14:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:34.653 12:14:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.653 12:14:17 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:34.653 12:14:17 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:34.653 12:14:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:34.910 MallocBdevForConfigChangeCheck 00:04:34.910 12:14:17 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:34.910 12:14:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:34.910 12:14:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.910 12:14:17 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:34.910 12:14:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.168 12:14:18 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:35.168 INFO: shutting down applications... 00:04:35.168 12:14:18 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:35.168 12:14:18 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:35.168 12:14:18 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:35.168 12:14:18 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:37.078 Calling clear_iscsi_subsystem 00:04:37.078 Calling clear_nvmf_subsystem 00:04:37.078 Calling clear_nbd_subsystem 00:04:37.078 Calling clear_ublk_subsystem 00:04:37.078 Calling clear_vhost_blk_subsystem 00:04:37.078 Calling clear_vhost_scsi_subsystem 00:04:37.078 Calling clear_bdev_subsystem 00:04:37.078 12:14:19 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:37.078 12:14:19 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:37.078 12:14:19 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:37.078 12:14:19 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:37.078 12:14:19 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:37.078 12:14:19 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:37.078 12:14:20 json_config -- json_config/json_config.sh@352 -- # break 00:04:37.078 12:14:20 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:37.078 12:14:20 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:37.078 12:14:20 json_config -- json_config/common.sh@31 -- # local app=target 00:04:37.078 12:14:20 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:37.078 12:14:20 json_config -- json_config/common.sh@35 -- # [[ -n 247733 ]] 00:04:37.078 12:14:20 json_config -- json_config/common.sh@38 -- # kill -SIGINT 247733 00:04:37.078 12:14:20 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:37.078 12:14:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.078 12:14:20 json_config -- json_config/common.sh@41 -- # kill -0 247733 00:04:37.078 12:14:20 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:37.646 12:14:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:37.646 12:14:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.646 12:14:20 json_config -- json_config/common.sh@41 -- # kill -0 247733 00:04:37.646 12:14:20 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:37.646 12:14:20 json_config -- json_config/common.sh@43 -- # break 00:04:37.646 12:14:20 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:37.646 12:14:20 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:37.646 SPDK target shutdown done 00:04:37.646 12:14:20 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:37.646 INFO: relaunching applications... 
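The shutdown sequence traced above (send SIGINT, then poll with `kill -0` up to 30 times with a 0.5 s sleep, then `break` once the PID is gone) can be sketched as a standalone helper. The function name and signal parameter here are illustrative, not the actual `json_config/common.sh` code:

```shell
#!/usr/bin/env bash
# Minimal sketch of the graceful-shutdown pattern in the trace: signal
# the app, then poll `kill -0` (which delivers no signal, only checks
# that the PID still exists) until it exits or the retry budget runs out.
# The 30 x 0.5s budget mirrors the log; the helper itself is hypothetical.
shutdown_app() {
    local pid=$1 sig=${2:-INT} i
    kill -s "$sig" "$pid" 2>/dev/null || return 0   # already gone
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || return 0      # exited cleanly
        sleep 0.5
    done
    return 1   # still running after ~15s; caller may escalate
}
```

The `kill -0` probe is what makes the loop cheap: it performs only a permission/existence check on the PID, so polling twice a second costs nothing.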
00:04:37.646 12:14:20 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:37.646 12:14:20 json_config -- json_config/common.sh@9 -- # local app=target 00:04:37.646 12:14:20 json_config -- json_config/common.sh@10 -- # shift 00:04:37.646 12:14:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:37.646 12:14:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:37.646 12:14:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:37.646 12:14:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.646 12:14:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.646 12:14:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=249319 00:04:37.646 12:14:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:37.646 Waiting for target to run... 00:04:37.646 12:14:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:37.646 12:14:20 json_config -- json_config/common.sh@25 -- # waitforlisten 249319 /var/tmp/spdk_tgt.sock 00:04:37.646 12:14:20 json_config -- common/autotest_common.sh@835 -- # '[' -z 249319 ']' 00:04:37.646 12:14:20 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:37.646 12:14:20 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.646 12:14:20 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:37.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:37.646 12:14:20 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.646 12:14:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.646 [2024-11-20 12:14:20.713168] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:04:37.646 [2024-11-20 12:14:20.713221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid249319 ] 00:04:38.213 [2024-11-20 12:14:21.169598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.213 [2024-11-20 12:14:21.227871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.501 [2024-11-20 12:14:24.260916] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:41.501 [2024-11-20 12:14:24.293279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:42.068 12:14:24 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.068 12:14:24 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:42.068 12:14:24 json_config -- json_config/common.sh@26 -- # echo '' 00:04:42.068 00:04:42.068 12:14:24 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:42.068 12:14:24 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:42.068 INFO: Checking if target configuration is the same... 
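The `waitforlisten 249319 /var/tmp/spdk_tgt.sock` step above blocks until the relaunched target is up. A reduced sketch of that idea is to poll for the UNIX domain socket file; the real helper also verifies the RPC server answers on it, and the function name and retry budget below are made up for illustration:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the "waitforlisten" step: poll until the
# target's UNIX domain socket (e.g. /var/tmp/spdk_tgt.sock) appears.
# Only the socket file is checked here, not RPC responsiveness.
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        if [ -S "$sock" ]; then return 0; fi   # socket file exists
        sleep 0.1
    done
    return 1   # gave up waiting
}
```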
00:04:42.068 12:14:24 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:42.068 12:14:24 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.068 12:14:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:42.068 + '[' 2 -ne 2 ']' 00:04:42.068 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:42.068 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:42.069 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:42.069 +++ basename /dev/fd/62 00:04:42.069 ++ mktemp /tmp/62.XXX 00:04:42.069 + tmp_file_1=/tmp/62.zQm 00:04:42.069 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.069 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:42.069 + tmp_file_2=/tmp/spdk_tgt_config.json.An6 00:04:42.069 + ret=0 00:04:42.069 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:42.327 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:42.327 + diff -u /tmp/62.zQm /tmp/spdk_tgt_config.json.An6 00:04:42.327 + echo 'INFO: JSON config files are the same' 00:04:42.327 INFO: JSON config files are the same 00:04:42.327 + rm /tmp/62.zQm /tmp/spdk_tgt_config.json.An6 00:04:42.327 + exit 0 00:04:42.327 12:14:25 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:42.327 12:14:25 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:42.327 INFO: changing configuration and checking if this can be detected... 
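The "configuration is the same" check above saves the live config, runs both JSON files through `config_filter.py -method sort`, and compares with `diff -u`, exiting 0 on a match. A minimal order-insensitive comparison can be sketched with python3's `json` module standing in for the filter script (the helper name is invented):

```shell
#!/usr/bin/env bash
# Sketch of the config-equality check: normalize both JSON files to a
# canonical form (sorted keys, single-line dump) and compare the results.
# Stands in for config_filter.py -method sort + diff from the trace.
configs_match() {
    local a b
    a=$(python3 -c 'import json,sys; print(json.dumps(json.load(open(sys.argv[1])), sort_keys=True))' "$1")
    b=$(python3 -c 'import json,sys; print(json.dumps(json.load(open(sys.argv[1])), sort_keys=True))' "$2")
    [ "$a" = "$b" ]
}
```

Sorting before diffing is the key move: it makes the comparison insensitive to key ordering, so only real content changes (like the `bdev_malloc_delete MallocBdevForConfigChangeCheck` performed next in the trace) produce a nonzero exit.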
00:04:42.327 12:14:25 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:42.327 12:14:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:42.586 12:14:25 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:42.586 12:14:25 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.586 12:14:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:42.586 + '[' 2 -ne 2 ']' 00:04:42.586 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:42.586 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:42.586 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:42.586 +++ basename /dev/fd/62 00:04:42.586 ++ mktemp /tmp/62.XXX 00:04:42.586 + tmp_file_1=/tmp/62.naS 00:04:42.586 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.586 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:42.586 + tmp_file_2=/tmp/spdk_tgt_config.json.ddt 00:04:42.587 + ret=0 00:04:42.587 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:42.845 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:42.845 + diff -u /tmp/62.naS /tmp/spdk_tgt_config.json.ddt 00:04:42.845 + ret=1 00:04:42.845 + echo '=== Start of file: /tmp/62.naS ===' 00:04:42.845 + cat /tmp/62.naS 00:04:42.845 + echo '=== End of file: /tmp/62.naS ===' 00:04:42.845 + echo '' 00:04:42.845 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ddt ===' 00:04:42.845 + cat /tmp/spdk_tgt_config.json.ddt 00:04:42.845 + echo '=== End of file: /tmp/spdk_tgt_config.json.ddt ===' 00:04:42.845 + echo '' 00:04:42.845 + rm /tmp/62.naS /tmp/spdk_tgt_config.json.ddt 00:04:42.845 + exit 1 00:04:42.845 12:14:25 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:42.845 INFO: configuration change detected. 
00:04:42.845 12:14:25 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:42.845 12:14:25 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:42.845 12:14:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:42.845 12:14:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.105 12:14:25 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:43.105 12:14:25 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:43.105 12:14:25 json_config -- json_config/json_config.sh@324 -- # [[ -n 249319 ]] 00:04:43.105 12:14:25 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:43.105 12:14:25 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:43.105 12:14:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.105 12:14:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.105 12:14:25 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:43.105 12:14:25 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:43.105 12:14:25 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:43.105 12:14:25 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:43.105 12:14:25 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:43.105 12:14:25 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:43.105 12:14:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:43.105 12:14:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.105 12:14:26 json_config -- json_config/json_config.sh@330 -- # killprocess 249319 00:04:43.105 12:14:26 json_config -- common/autotest_common.sh@954 -- # '[' -z 249319 ']' 00:04:43.105 12:14:26 json_config -- common/autotest_common.sh@958 -- # kill -0 249319 
00:04:43.105 12:14:26 json_config -- common/autotest_common.sh@959 -- # uname 00:04:43.105 12:14:26 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.105 12:14:26 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 249319 00:04:43.105 12:14:26 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.105 12:14:26 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.105 12:14:26 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 249319' 00:04:43.105 killing process with pid 249319 00:04:43.105 12:14:26 json_config -- common/autotest_common.sh@973 -- # kill 249319 00:04:43.105 12:14:26 json_config -- common/autotest_common.sh@978 -- # wait 249319 00:04:44.485 12:14:27 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.485 12:14:27 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:44.485 12:14:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:44.485 12:14:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.485 12:14:27 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:44.485 12:14:27 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:44.485 INFO: Success 00:04:44.485 00:04:44.485 real 0m15.892s 00:04:44.485 user 0m16.398s 00:04:44.485 sys 0m2.777s 00:04:44.485 12:14:27 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.485 12:14:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.485 ************************************ 00:04:44.485 END TEST json_config 00:04:44.485 ************************************ 00:04:44.745 12:14:27 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:44.745 12:14:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.745 12:14:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.745 12:14:27 -- common/autotest_common.sh@10 -- # set +x 00:04:44.745 ************************************ 00:04:44.745 START TEST json_config_extra_key 00:04:44.745 ************************************ 00:04:44.745 12:14:27 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:44.745 12:14:27 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.745 12:14:27 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.745 12:14:27 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.745 12:14:27 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.745 12:14:27 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:44.746 12:14:27 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.746 12:14:27 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.746 12:14:27 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.746 12:14:27 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:44.746 12:14:27 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.746 12:14:27 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.746 --rc genhtml_branch_coverage=1 00:04:44.746 --rc genhtml_function_coverage=1 00:04:44.746 --rc genhtml_legend=1 00:04:44.746 --rc geninfo_all_blocks=1 
00:04:44.746 --rc geninfo_unexecuted_blocks=1 00:04:44.746 00:04:44.746 ' 00:04:44.746 12:14:27 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.746 --rc genhtml_branch_coverage=1 00:04:44.746 --rc genhtml_function_coverage=1 00:04:44.746 --rc genhtml_legend=1 00:04:44.746 --rc geninfo_all_blocks=1 00:04:44.746 --rc geninfo_unexecuted_blocks=1 00:04:44.746 00:04:44.746 ' 00:04:44.746 12:14:27 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.746 --rc genhtml_branch_coverage=1 00:04:44.746 --rc genhtml_function_coverage=1 00:04:44.746 --rc genhtml_legend=1 00:04:44.746 --rc geninfo_all_blocks=1 00:04:44.746 --rc geninfo_unexecuted_blocks=1 00:04:44.746 00:04:44.746 ' 00:04:44.746 12:14:27 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.746 --rc genhtml_branch_coverage=1 00:04:44.746 --rc genhtml_function_coverage=1 00:04:44.746 --rc genhtml_legend=1 00:04:44.746 --rc geninfo_all_blocks=1 00:04:44.746 --rc geninfo_unexecuted_blocks=1 00:04:44.746 00:04:44.746 ' 00:04:44.746 12:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
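The `lt 1.15 2` / `cmp_versions` trace above (splitting on `IFS=.-:`, reading into `ver1`/`ver2` arrays, comparing component by component) implements numeric version ordering. A compact sketch of the same idea, not the real `scripts/common.sh` implementation:

```shell
#!/usr/bin/env bash
# Sketch of dotted-version comparison as traced above: split both
# versions on dots and compare components numerically. A plain string
# compare would order "1.9" after "1.15", which is numerically wrong.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i len=${#v1[@]}
    if (( ${#v2[@]} > len )); then len=${#v2[@]}; fi
    for ((i = 0; i < len; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # versions are equal
}
```

This is why the trace ends with `return 0` for `lt 1.15 2`: the first components already decide it (1 < 2), so lcov 1.15 is treated as older than 2 and the legacy `--rc lcov_*` option spelling is selected.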
00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:44.746 12:14:27 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:44.746 12:14:27 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.746 12:14:27 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.746 12:14:27 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.746 12:14:27 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.746 12:14:27 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.746 12:14:27 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.746 12:14:27 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:44.746 12:14:27 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:44.746 12:14:27 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:44.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:44.746 12:14:27 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:44.746 12:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:44.746 12:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:44.746 12:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:44.746 12:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:44.746 12:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:44.746 12:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:44.746 12:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:44.746 12:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:44.746 12:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:44.746 12:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:44.746 12:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:44.746 INFO: launching applications... 00:04:44.746 12:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:44.746 12:14:27 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:44.746 12:14:27 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:44.746 12:14:27 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:44.746 12:14:27 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:44.746 12:14:27 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:44.746 12:14:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.746 12:14:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.746 12:14:27 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=250597 00:04:44.746 12:14:27 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:44.746 Waiting for target to run... 
00:04:44.746 12:14:27 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 250597 /var/tmp/spdk_tgt.sock 00:04:44.746 12:14:27 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 250597 ']' 00:04:44.747 12:14:27 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:44.747 12:14:27 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:44.747 12:14:27 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.747 12:14:27 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:44.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:44.747 12:14:27 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.747 12:14:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:45.006 [2024-11-20 12:14:27.887711] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:04:45.006 [2024-11-20 12:14:27.887760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid250597 ] 00:04:45.264 [2024-11-20 12:14:28.338285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.522 [2024-11-20 12:14:28.392706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.781 12:14:28 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.781 12:14:28 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:45.781 12:14:28 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:45.781 00:04:45.781 12:14:28 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:45.781 INFO: shutting down applications... 00:04:45.781 12:14:28 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:45.781 12:14:28 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:45.781 12:14:28 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:45.781 12:14:28 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 250597 ]] 00:04:45.781 12:14:28 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 250597 00:04:45.781 12:14:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:45.781 12:14:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.781 12:14:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 250597 00:04:45.781 12:14:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.349 12:14:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.349 12:14:29 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.349 12:14:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 250597 00:04:46.349 12:14:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:46.349 12:14:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:46.349 12:14:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:46.349 12:14:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:46.349 SPDK target shutdown done 00:04:46.349 12:14:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:46.349 Success 00:04:46.349 00:04:46.349 real 0m1.585s 00:04:46.349 user 0m1.221s 00:04:46.349 sys 0m0.555s 00:04:46.349 12:14:29 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.349 12:14:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:46.349 ************************************ 00:04:46.349 END TEST json_config_extra_key 00:04:46.349 ************************************ 00:04:46.349 12:14:29 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:46.349 12:14:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.349 12:14:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.349 12:14:29 -- common/autotest_common.sh@10 -- # set +x 00:04:46.349 ************************************ 00:04:46.349 START TEST alias_rpc 00:04:46.349 ************************************ 00:04:46.349 12:14:29 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:46.349 * Looking for test storage... 
00:04:46.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:46.349 12:14:29 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:46.349 12:14:29 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:46.349 12:14:29 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:46.349 12:14:29 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:46.349 12:14:29 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.349 12:14:29 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.349 12:14:29 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.349 12:14:29 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.349 12:14:29 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.349 12:14:29 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.349 12:14:29 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.349 12:14:29 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.349 12:14:29 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.349 12:14:29 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.349 12:14:29 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.349 12:14:29 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:46.349 12:14:29 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:46.349 12:14:29 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.608 12:14:29 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.608 12:14:29 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:46.608 12:14:29 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:46.608 12:14:29 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.608 12:14:29 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:46.608 12:14:29 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.608 12:14:29 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:46.608 12:14:29 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:46.608 12:14:29 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.608 12:14:29 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:46.608 12:14:29 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.608 12:14:29 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.608 12:14:29 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.608 12:14:29 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:46.608 12:14:29 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.608 12:14:29 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:46.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.608 --rc genhtml_branch_coverage=1 00:04:46.608 --rc genhtml_function_coverage=1 00:04:46.608 --rc genhtml_legend=1 00:04:46.608 --rc geninfo_all_blocks=1 00:04:46.608 --rc geninfo_unexecuted_blocks=1 00:04:46.608 00:04:46.608 ' 00:04:46.608 12:14:29 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:46.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.608 --rc genhtml_branch_coverage=1 00:04:46.608 --rc genhtml_function_coverage=1 00:04:46.608 --rc genhtml_legend=1 00:04:46.608 --rc geninfo_all_blocks=1 00:04:46.608 --rc geninfo_unexecuted_blocks=1 00:04:46.608 00:04:46.608 ' 00:04:46.608 12:14:29 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:04:46.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.608 --rc genhtml_branch_coverage=1 00:04:46.608 --rc genhtml_function_coverage=1 00:04:46.608 --rc genhtml_legend=1 00:04:46.608 --rc geninfo_all_blocks=1 00:04:46.608 --rc geninfo_unexecuted_blocks=1 00:04:46.608 00:04:46.608 ' 00:04:46.608 12:14:29 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:46.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.608 --rc genhtml_branch_coverage=1 00:04:46.609 --rc genhtml_function_coverage=1 00:04:46.609 --rc genhtml_legend=1 00:04:46.609 --rc geninfo_all_blocks=1 00:04:46.609 --rc geninfo_unexecuted_blocks=1 00:04:46.609 00:04:46.609 ' 00:04:46.609 12:14:29 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:46.609 12:14:29 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=250922 00:04:46.609 12:14:29 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 250922 00:04:46.609 12:14:29 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.609 12:14:29 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 250922 ']' 00:04:46.609 12:14:29 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.609 12:14:29 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.609 12:14:29 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.609 12:14:29 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.609 12:14:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.609 [2024-11-20 12:14:29.528999] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
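The `scripts/common.sh` trace above (`cmp_versions 1.15 '<' 2`) splits both version strings on `.-:` and compares the numeric fields left to right, padding the shorter one with zeros. A hypothetical re-creation of that comparison logic, assuming purely numeric components (`version_lt` is an illustrative name, not the script's own):

```shell
# Compare two dotted version strings numerically, field by field,
# mirroring the cmp_versions trace above (IFS=.-: splitting).
version_lt() {
  local IFS=.-:
  local -a v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i
  for ((i = 0; i < n; i++)); do
    # Missing fields default to 0, so "2" compares like "2.0".
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
  done
  return 1   # equal versions are not "less than"
}
```

With this sketch, `version_lt 1.15 2` succeeds, which is why the trace takes the `lcov_rc_opt` branch for lcov older than 2.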
00:04:46.609 [2024-11-20 12:14:29.529046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid250922 ] 00:04:46.609 [2024-11-20 12:14:29.604749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.609 [2024-11-20 12:14:29.648442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.868 12:14:29 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.868 12:14:29 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:46.868 12:14:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:47.126 12:14:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 250922 00:04:47.126 12:14:30 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 250922 ']' 00:04:47.126 12:14:30 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 250922 00:04:47.126 12:14:30 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:47.126 12:14:30 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.126 12:14:30 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 250922 00:04:47.126 12:14:30 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.126 12:14:30 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.126 12:14:30 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 250922' 00:04:47.126 killing process with pid 250922 00:04:47.126 12:14:30 alias_rpc -- common/autotest_common.sh@973 -- # kill 250922 00:04:47.126 12:14:30 alias_rpc -- common/autotest_common.sh@978 -- # wait 250922 00:04:47.385 00:04:47.385 real 0m1.158s 00:04:47.385 user 0m1.195s 00:04:47.385 sys 0m0.415s 00:04:47.385 12:14:30 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.385 12:14:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.385 ************************************ 00:04:47.385 END TEST alias_rpc 00:04:47.385 ************************************ 00:04:47.385 12:14:30 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:47.385 12:14:30 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:47.385 12:14:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.385 12:14:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.385 12:14:30 -- common/autotest_common.sh@10 -- # set +x 00:04:47.645 ************************************ 00:04:47.645 START TEST spdkcli_tcp 00:04:47.645 ************************************ 00:04:47.645 12:14:30 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:47.645 * Looking for test storage... 
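The `waitforlisten` helper traced in these tests blocks until the freshly launched `spdk_tgt` is listening on its UNIX domain socket (`/var/tmp/spdk.sock` in the trace), retrying up to `max_retries=100` times. A hypothetical sketch of that pattern (`wait_for_socket` is an illustrative name; the real helper also verifies the RPC endpoint responds):

```shell
# Poll for a UNIX domain socket to appear, as the waitforlisten
# trace above does for /var/tmp/spdk.sock (max_retries=100).
wait_for_socket() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  echo "timed out waiting for $sock" >&2
  return 1
}
```

A simple `[ -S ]` probe only proves the socket file exists; checking that the daemon actually answers (e.g. with an `rpc.py` call, as the later `spdkcli_tcp` trace does through a `socat` TCP bridge) is the more robust variant.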
00:04:47.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:47.645 12:14:30 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.645 12:14:30 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.645 12:14:30 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.645 12:14:30 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.645 12:14:30 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:47.645 12:14:30 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.645 12:14:30 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.645 --rc genhtml_branch_coverage=1 00:04:47.645 --rc genhtml_function_coverage=1 00:04:47.645 --rc genhtml_legend=1 00:04:47.645 --rc geninfo_all_blocks=1 00:04:47.645 --rc geninfo_unexecuted_blocks=1 00:04:47.645 00:04:47.645 ' 00:04:47.645 12:14:30 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.645 --rc genhtml_branch_coverage=1 00:04:47.645 --rc genhtml_function_coverage=1 00:04:47.645 --rc genhtml_legend=1 00:04:47.645 --rc geninfo_all_blocks=1 00:04:47.645 --rc geninfo_unexecuted_blocks=1 00:04:47.645 00:04:47.645 ' 00:04:47.645 12:14:30 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.645 --rc genhtml_branch_coverage=1 00:04:47.645 --rc genhtml_function_coverage=1 00:04:47.645 --rc genhtml_legend=1 00:04:47.645 --rc geninfo_all_blocks=1 00:04:47.645 --rc geninfo_unexecuted_blocks=1 00:04:47.645 00:04:47.645 ' 00:04:47.645 12:14:30 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.645 --rc genhtml_branch_coverage=1 00:04:47.645 --rc genhtml_function_coverage=1 00:04:47.645 --rc genhtml_legend=1 00:04:47.645 --rc geninfo_all_blocks=1 00:04:47.645 --rc geninfo_unexecuted_blocks=1 00:04:47.645 00:04:47.645 ' 00:04:47.645 12:14:30 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:47.645 12:14:30 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:47.645 12:14:30 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:47.645 12:14:30 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:47.645 12:14:30 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:47.645 12:14:30 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:47.645 12:14:30 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:47.645 12:14:30 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:47.645 12:14:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:47.645 12:14:30 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=251172 00:04:47.645 12:14:30 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:47.645 12:14:30 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 251172 00:04:47.645 12:14:30 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 251172 ']' 00:04:47.645 12:14:30 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.645 12:14:30 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.645 12:14:30 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.645 12:14:30 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.645 12:14:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:47.645 [2024-11-20 12:14:30.755612] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:04:47.645 [2024-11-20 12:14:30.755658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251172 ] 00:04:47.905 [2024-11-20 12:14:30.834013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.905 [2024-11-20 12:14:30.877875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.905 [2024-11-20 12:14:30.877877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.165 12:14:31 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.165 12:14:31 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:48.165 12:14:31 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=251374 00:04:48.165 12:14:31 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:48.165 12:14:31 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 
UNIX-CONNECT:/var/tmp/spdk.sock 00:04:48.165 [ 00:04:48.165 "bdev_malloc_delete", 00:04:48.165 "bdev_malloc_create", 00:04:48.165 "bdev_null_resize", 00:04:48.165 "bdev_null_delete", 00:04:48.165 "bdev_null_create", 00:04:48.165 "bdev_nvme_cuse_unregister", 00:04:48.165 "bdev_nvme_cuse_register", 00:04:48.165 "bdev_opal_new_user", 00:04:48.165 "bdev_opal_set_lock_state", 00:04:48.165 "bdev_opal_delete", 00:04:48.165 "bdev_opal_get_info", 00:04:48.165 "bdev_opal_create", 00:04:48.165 "bdev_nvme_opal_revert", 00:04:48.165 "bdev_nvme_opal_init", 00:04:48.165 "bdev_nvme_send_cmd", 00:04:48.165 "bdev_nvme_set_keys", 00:04:48.165 "bdev_nvme_get_path_iostat", 00:04:48.165 "bdev_nvme_get_mdns_discovery_info", 00:04:48.165 "bdev_nvme_stop_mdns_discovery", 00:04:48.165 "bdev_nvme_start_mdns_discovery", 00:04:48.165 "bdev_nvme_set_multipath_policy", 00:04:48.165 "bdev_nvme_set_preferred_path", 00:04:48.165 "bdev_nvme_get_io_paths", 00:04:48.165 "bdev_nvme_remove_error_injection", 00:04:48.165 "bdev_nvme_add_error_injection", 00:04:48.165 "bdev_nvme_get_discovery_info", 00:04:48.165 "bdev_nvme_stop_discovery", 00:04:48.165 "bdev_nvme_start_discovery", 00:04:48.165 "bdev_nvme_get_controller_health_info", 00:04:48.165 "bdev_nvme_disable_controller", 00:04:48.165 "bdev_nvme_enable_controller", 00:04:48.165 "bdev_nvme_reset_controller", 00:04:48.165 "bdev_nvme_get_transport_statistics", 00:04:48.165 "bdev_nvme_apply_firmware", 00:04:48.165 "bdev_nvme_detach_controller", 00:04:48.165 "bdev_nvme_get_controllers", 00:04:48.165 "bdev_nvme_attach_controller", 00:04:48.165 "bdev_nvme_set_hotplug", 00:04:48.165 "bdev_nvme_set_options", 00:04:48.165 "bdev_passthru_delete", 00:04:48.165 "bdev_passthru_create", 00:04:48.165 "bdev_lvol_set_parent_bdev", 00:04:48.165 "bdev_lvol_set_parent", 00:04:48.165 "bdev_lvol_check_shallow_copy", 00:04:48.165 "bdev_lvol_start_shallow_copy", 00:04:48.165 "bdev_lvol_grow_lvstore", 00:04:48.165 "bdev_lvol_get_lvols", 00:04:48.165 "bdev_lvol_get_lvstores", 
00:04:48.165 "bdev_lvol_delete", 00:04:48.165 "bdev_lvol_set_read_only", 00:04:48.165 "bdev_lvol_resize", 00:04:48.165 "bdev_lvol_decouple_parent", 00:04:48.165 "bdev_lvol_inflate", 00:04:48.165 "bdev_lvol_rename", 00:04:48.165 "bdev_lvol_clone_bdev", 00:04:48.165 "bdev_lvol_clone", 00:04:48.165 "bdev_lvol_snapshot", 00:04:48.165 "bdev_lvol_create", 00:04:48.165 "bdev_lvol_delete_lvstore", 00:04:48.165 "bdev_lvol_rename_lvstore", 00:04:48.165 "bdev_lvol_create_lvstore", 00:04:48.165 "bdev_raid_set_options", 00:04:48.165 "bdev_raid_remove_base_bdev", 00:04:48.165 "bdev_raid_add_base_bdev", 00:04:48.165 "bdev_raid_delete", 00:04:48.165 "bdev_raid_create", 00:04:48.165 "bdev_raid_get_bdevs", 00:04:48.165 "bdev_error_inject_error", 00:04:48.165 "bdev_error_delete", 00:04:48.165 "bdev_error_create", 00:04:48.165 "bdev_split_delete", 00:04:48.165 "bdev_split_create", 00:04:48.165 "bdev_delay_delete", 00:04:48.165 "bdev_delay_create", 00:04:48.165 "bdev_delay_update_latency", 00:04:48.165 "bdev_zone_block_delete", 00:04:48.165 "bdev_zone_block_create", 00:04:48.165 "blobfs_create", 00:04:48.165 "blobfs_detect", 00:04:48.165 "blobfs_set_cache_size", 00:04:48.165 "bdev_aio_delete", 00:04:48.165 "bdev_aio_rescan", 00:04:48.165 "bdev_aio_create", 00:04:48.165 "bdev_ftl_set_property", 00:04:48.165 "bdev_ftl_get_properties", 00:04:48.165 "bdev_ftl_get_stats", 00:04:48.165 "bdev_ftl_unmap", 00:04:48.165 "bdev_ftl_unload", 00:04:48.165 "bdev_ftl_delete", 00:04:48.165 "bdev_ftl_load", 00:04:48.165 "bdev_ftl_create", 00:04:48.165 "bdev_virtio_attach_controller", 00:04:48.165 "bdev_virtio_scsi_get_devices", 00:04:48.165 "bdev_virtio_detach_controller", 00:04:48.165 "bdev_virtio_blk_set_hotplug", 00:04:48.165 "bdev_iscsi_delete", 00:04:48.165 "bdev_iscsi_create", 00:04:48.165 "bdev_iscsi_set_options", 00:04:48.165 "accel_error_inject_error", 00:04:48.165 "ioat_scan_accel_module", 00:04:48.165 "dsa_scan_accel_module", 00:04:48.165 "iaa_scan_accel_module", 00:04:48.165 
"vfu_virtio_create_fs_endpoint", 00:04:48.165 "vfu_virtio_create_scsi_endpoint", 00:04:48.165 "vfu_virtio_scsi_remove_target", 00:04:48.165 "vfu_virtio_scsi_add_target", 00:04:48.165 "vfu_virtio_create_blk_endpoint", 00:04:48.165 "vfu_virtio_delete_endpoint", 00:04:48.165 "keyring_file_remove_key", 00:04:48.165 "keyring_file_add_key", 00:04:48.165 "keyring_linux_set_options", 00:04:48.165 "fsdev_aio_delete", 00:04:48.165 "fsdev_aio_create", 00:04:48.165 "iscsi_get_histogram", 00:04:48.165 "iscsi_enable_histogram", 00:04:48.165 "iscsi_set_options", 00:04:48.165 "iscsi_get_auth_groups", 00:04:48.165 "iscsi_auth_group_remove_secret", 00:04:48.165 "iscsi_auth_group_add_secret", 00:04:48.165 "iscsi_delete_auth_group", 00:04:48.165 "iscsi_create_auth_group", 00:04:48.165 "iscsi_set_discovery_auth", 00:04:48.165 "iscsi_get_options", 00:04:48.165 "iscsi_target_node_request_logout", 00:04:48.165 "iscsi_target_node_set_redirect", 00:04:48.165 "iscsi_target_node_set_auth", 00:04:48.165 "iscsi_target_node_add_lun", 00:04:48.165 "iscsi_get_stats", 00:04:48.165 "iscsi_get_connections", 00:04:48.165 "iscsi_portal_group_set_auth", 00:04:48.165 "iscsi_start_portal_group", 00:04:48.165 "iscsi_delete_portal_group", 00:04:48.165 "iscsi_create_portal_group", 00:04:48.165 "iscsi_get_portal_groups", 00:04:48.165 "iscsi_delete_target_node", 00:04:48.165 "iscsi_target_node_remove_pg_ig_maps", 00:04:48.165 "iscsi_target_node_add_pg_ig_maps", 00:04:48.165 "iscsi_create_target_node", 00:04:48.165 "iscsi_get_target_nodes", 00:04:48.165 "iscsi_delete_initiator_group", 00:04:48.165 "iscsi_initiator_group_remove_initiators", 00:04:48.165 "iscsi_initiator_group_add_initiators", 00:04:48.165 "iscsi_create_initiator_group", 00:04:48.165 "iscsi_get_initiator_groups", 00:04:48.165 "nvmf_set_crdt", 00:04:48.165 "nvmf_set_config", 00:04:48.165 "nvmf_set_max_subsystems", 00:04:48.165 "nvmf_stop_mdns_prr", 00:04:48.165 "nvmf_publish_mdns_prr", 00:04:48.165 "nvmf_subsystem_get_listeners", 00:04:48.165 
"nvmf_subsystem_get_qpairs", 00:04:48.165 "nvmf_subsystem_get_controllers", 00:04:48.165 "nvmf_get_stats", 00:04:48.165 "nvmf_get_transports", 00:04:48.165 "nvmf_create_transport", 00:04:48.165 "nvmf_get_targets", 00:04:48.165 "nvmf_delete_target", 00:04:48.165 "nvmf_create_target", 00:04:48.165 "nvmf_subsystem_allow_any_host", 00:04:48.165 "nvmf_subsystem_set_keys", 00:04:48.165 "nvmf_subsystem_remove_host", 00:04:48.165 "nvmf_subsystem_add_host", 00:04:48.165 "nvmf_ns_remove_host", 00:04:48.165 "nvmf_ns_add_host", 00:04:48.165 "nvmf_subsystem_remove_ns", 00:04:48.165 "nvmf_subsystem_set_ns_ana_group", 00:04:48.165 "nvmf_subsystem_add_ns", 00:04:48.165 "nvmf_subsystem_listener_set_ana_state", 00:04:48.165 "nvmf_discovery_get_referrals", 00:04:48.165 "nvmf_discovery_remove_referral", 00:04:48.165 "nvmf_discovery_add_referral", 00:04:48.166 "nvmf_subsystem_remove_listener", 00:04:48.166 "nvmf_subsystem_add_listener", 00:04:48.166 "nvmf_delete_subsystem", 00:04:48.166 "nvmf_create_subsystem", 00:04:48.166 "nvmf_get_subsystems", 00:04:48.166 "env_dpdk_get_mem_stats", 00:04:48.166 "nbd_get_disks", 00:04:48.166 "nbd_stop_disk", 00:04:48.166 "nbd_start_disk", 00:04:48.166 "ublk_recover_disk", 00:04:48.166 "ublk_get_disks", 00:04:48.166 "ublk_stop_disk", 00:04:48.166 "ublk_start_disk", 00:04:48.166 "ublk_destroy_target", 00:04:48.166 "ublk_create_target", 00:04:48.166 "virtio_blk_create_transport", 00:04:48.166 "virtio_blk_get_transports", 00:04:48.166 "vhost_controller_set_coalescing", 00:04:48.166 "vhost_get_controllers", 00:04:48.166 "vhost_delete_controller", 00:04:48.166 "vhost_create_blk_controller", 00:04:48.166 "vhost_scsi_controller_remove_target", 00:04:48.166 "vhost_scsi_controller_add_target", 00:04:48.166 "vhost_start_scsi_controller", 00:04:48.166 "vhost_create_scsi_controller", 00:04:48.166 "thread_set_cpumask", 00:04:48.166 "scheduler_set_options", 00:04:48.166 "framework_get_governor", 00:04:48.166 "framework_get_scheduler", 00:04:48.166 
"framework_set_scheduler", 00:04:48.166 "framework_get_reactors", 00:04:48.166 "thread_get_io_channels", 00:04:48.166 "thread_get_pollers", 00:04:48.166 "thread_get_stats", 00:04:48.166 "framework_monitor_context_switch", 00:04:48.166 "spdk_kill_instance", 00:04:48.166 "log_enable_timestamps", 00:04:48.166 "log_get_flags", 00:04:48.166 "log_clear_flag", 00:04:48.166 "log_set_flag", 00:04:48.166 "log_get_level", 00:04:48.166 "log_set_level", 00:04:48.166 "log_get_print_level", 00:04:48.166 "log_set_print_level", 00:04:48.166 "framework_enable_cpumask_locks", 00:04:48.166 "framework_disable_cpumask_locks", 00:04:48.166 "framework_wait_init", 00:04:48.166 "framework_start_init", 00:04:48.166 "scsi_get_devices", 00:04:48.166 "bdev_get_histogram", 00:04:48.166 "bdev_enable_histogram", 00:04:48.166 "bdev_set_qos_limit", 00:04:48.166 "bdev_set_qd_sampling_period", 00:04:48.166 "bdev_get_bdevs", 00:04:48.166 "bdev_reset_iostat", 00:04:48.166 "bdev_get_iostat", 00:04:48.166 "bdev_examine", 00:04:48.166 "bdev_wait_for_examine", 00:04:48.166 "bdev_set_options", 00:04:48.166 "accel_get_stats", 00:04:48.166 "accel_set_options", 00:04:48.166 "accel_set_driver", 00:04:48.166 "accel_crypto_key_destroy", 00:04:48.166 "accel_crypto_keys_get", 00:04:48.166 "accel_crypto_key_create", 00:04:48.166 "accel_assign_opc", 00:04:48.166 "accel_get_module_info", 00:04:48.166 "accel_get_opc_assignments", 00:04:48.166 "vmd_rescan", 00:04:48.166 "vmd_remove_device", 00:04:48.166 "vmd_enable", 00:04:48.166 "sock_get_default_impl", 00:04:48.166 "sock_set_default_impl", 00:04:48.166 "sock_impl_set_options", 00:04:48.166 "sock_impl_get_options", 00:04:48.166 "iobuf_get_stats", 00:04:48.166 "iobuf_set_options", 00:04:48.166 "keyring_get_keys", 00:04:48.166 "vfu_tgt_set_base_path", 00:04:48.166 "framework_get_pci_devices", 00:04:48.166 "framework_get_config", 00:04:48.166 "framework_get_subsystems", 00:04:48.166 "fsdev_set_opts", 00:04:48.166 "fsdev_get_opts", 00:04:48.166 "trace_get_info", 
00:04:48.166 "trace_get_tpoint_group_mask", 00:04:48.166 "trace_disable_tpoint_group", 00:04:48.166 "trace_enable_tpoint_group", 00:04:48.166 "trace_clear_tpoint_mask", 00:04:48.166 "trace_set_tpoint_mask", 00:04:48.166 "notify_get_notifications", 00:04:48.166 "notify_get_types", 00:04:48.166 "spdk_get_version", 00:04:48.166 "rpc_get_methods" 00:04:48.166 ] 00:04:48.425 12:14:31 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:48.425 12:14:31 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:48.425 12:14:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.425 12:14:31 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:48.425 12:14:31 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 251172 00:04:48.426 12:14:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 251172 ']' 00:04:48.426 12:14:31 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 251172 00:04:48.426 12:14:31 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:48.426 12:14:31 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.426 12:14:31 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 251172 00:04:48.426 12:14:31 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.426 12:14:31 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.426 12:14:31 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 251172' 00:04:48.426 killing process with pid 251172 00:04:48.426 12:14:31 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 251172 00:04:48.426 12:14:31 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 251172 00:04:48.685 00:04:48.685 real 0m1.146s 00:04:48.685 user 0m1.940s 00:04:48.685 sys 0m0.434s 00:04:48.685 12:14:31 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.685 12:14:31 spdkcli_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:04:48.685 ************************************ 00:04:48.685 END TEST spdkcli_tcp 00:04:48.685 ************************************ 00:04:48.685 12:14:31 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:48.685 12:14:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.685 12:14:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.685 12:14:31 -- common/autotest_common.sh@10 -- # set +x 00:04:48.685 ************************************ 00:04:48.685 START TEST dpdk_mem_utility 00:04:48.685 ************************************ 00:04:48.685 12:14:31 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:48.945 * Looking for test storage... 00:04:48.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:48.945 12:14:31 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:48.945 12:14:31 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:48.945 12:14:31 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:48.945 12:14:31 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.945 12:14:31 
dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.945 12:14:31 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:48.945 12:14:31 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.945 12:14:31 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:48.945 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.945 --rc genhtml_branch_coverage=1 00:04:48.945 --rc genhtml_function_coverage=1 00:04:48.945 --rc genhtml_legend=1 00:04:48.945 --rc geninfo_all_blocks=1 00:04:48.945 --rc geninfo_unexecuted_blocks=1 00:04:48.945 00:04:48.945 ' 00:04:48.945 12:14:31 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:48.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.945 --rc genhtml_branch_coverage=1 00:04:48.945 --rc genhtml_function_coverage=1 00:04:48.945 --rc genhtml_legend=1 00:04:48.945 --rc geninfo_all_blocks=1 00:04:48.945 --rc geninfo_unexecuted_blocks=1 00:04:48.945 00:04:48.945 ' 00:04:48.945 12:14:31 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:48.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.945 --rc genhtml_branch_coverage=1 00:04:48.945 --rc genhtml_function_coverage=1 00:04:48.945 --rc genhtml_legend=1 00:04:48.945 --rc geninfo_all_blocks=1 00:04:48.945 --rc geninfo_unexecuted_blocks=1 00:04:48.945 00:04:48.945 ' 00:04:48.945 12:14:31 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:48.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.945 --rc genhtml_branch_coverage=1 00:04:48.945 --rc genhtml_function_coverage=1 00:04:48.945 --rc genhtml_legend=1 00:04:48.945 --rc geninfo_all_blocks=1 00:04:48.945 --rc geninfo_unexecuted_blocks=1 00:04:48.945 00:04:48.945 ' 00:04:48.945 12:14:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:48.945 12:14:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=251484 00:04:48.945 12:14:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 251484 00:04:48.945 12:14:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.945 12:14:31 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 251484 ']' 00:04:48.945 12:14:31 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.945 12:14:31 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.945 12:14:31 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.945 12:14:31 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.945 12:14:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:48.945 [2024-11-20 12:14:31.971428] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:04:48.945 [2024-11-20 12:14:31.971474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251484 ] 00:04:48.945 [2024-11-20 12:14:32.047533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.204 [2024-11-20 12:14:32.090920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.204 12:14:32 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.204 12:14:32 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:49.205 12:14:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:49.205 12:14:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:49.205 12:14:32 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.205 12:14:32 
dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:49.465 { 00:04:49.465 "filename": "/tmp/spdk_mem_dump.txt" 00:04:49.465 } 00:04:49.465 12:14:32 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.465 12:14:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:49.465 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:49.465 1 heaps totaling size 810.000000 MiB 00:04:49.465 size: 810.000000 MiB heap id: 0 00:04:49.465 end heaps---------- 00:04:49.465 9 mempools totaling size 595.772034 MiB 00:04:49.465 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:49.465 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:49.465 size: 92.545471 MiB name: bdev_io_251484 00:04:49.465 size: 50.003479 MiB name: msgpool_251484 00:04:49.465 size: 36.509338 MiB name: fsdev_io_251484 00:04:49.465 size: 21.763794 MiB name: PDU_Pool 00:04:49.465 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:49.465 size: 4.133484 MiB name: evtpool_251484 00:04:49.465 size: 0.026123 MiB name: Session_Pool 00:04:49.465 end mempools------- 00:04:49.465 6 memzones totaling size 4.142822 MiB 00:04:49.465 size: 1.000366 MiB name: RG_ring_0_251484 00:04:49.465 size: 1.000366 MiB name: RG_ring_1_251484 00:04:49.465 size: 1.000366 MiB name: RG_ring_4_251484 00:04:49.465 size: 1.000366 MiB name: RG_ring_5_251484 00:04:49.465 size: 0.125366 MiB name: RG_ring_2_251484 00:04:49.465 size: 0.015991 MiB name: RG_ring_3_251484 00:04:49.465 end memzones------- 00:04:49.465 12:14:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:49.465 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:49.465 list of free elements. 
size: 10.862488 MiB 00:04:49.465 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:49.465 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:49.465 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:49.465 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:49.465 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:49.465 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:49.465 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:49.465 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:49.465 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:49.465 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:49.465 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:49.465 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:49.465 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:49.465 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:49.465 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:49.465 list of standard malloc elements. 
size: 199.218628 MiB 00:04:49.465 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:49.465 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:49.465 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:49.465 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:49.465 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:49.465 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:49.465 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:49.465 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:49.465 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:49.465 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:49.465 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:49.465 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:49.465 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:49.465 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:49.465 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:49.465 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:49.465 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:49.465 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:49.465 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:49.465 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:49.465 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:49.465 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:49.465 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:49.465 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:49.465 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:49.465 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:49.465 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:49.465 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:49.465 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:49.465 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:49.465 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:49.465 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:49.465 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:49.465 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:49.465 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:49.465 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:49.465 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:49.465 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:49.465 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:49.465 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:49.465 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:49.465 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:49.465 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:49.465 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:49.465 list of memzone associated elements. 
size: 599.918884 MiB 00:04:49.465 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:49.465 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:49.465 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:49.465 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:49.465 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:49.465 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_251484_0 00:04:49.465 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:49.465 associated memzone info: size: 48.002930 MiB name: MP_msgpool_251484_0 00:04:49.465 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:49.465 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_251484_0 00:04:49.465 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:49.465 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:49.465 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:49.465 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:49.465 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:49.465 associated memzone info: size: 3.000122 MiB name: MP_evtpool_251484_0 00:04:49.465 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:49.465 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_251484 00:04:49.465 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:49.465 associated memzone info: size: 1.007996 MiB name: MP_evtpool_251484 00:04:49.465 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:49.465 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:49.465 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:49.465 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:49.465 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:49.465 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:49.465 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:49.465 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:49.465 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:49.465 associated memzone info: size: 1.000366 MiB name: RG_ring_0_251484 00:04:49.465 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:49.465 associated memzone info: size: 1.000366 MiB name: RG_ring_1_251484 00:04:49.465 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:49.465 associated memzone info: size: 1.000366 MiB name: RG_ring_4_251484 00:04:49.465 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:04:49.465 associated memzone info: size: 1.000366 MiB name: RG_ring_5_251484 00:04:49.465 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:49.465 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_251484 00:04:49.465 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:49.465 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_251484 00:04:49.465 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:49.465 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:49.465 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:49.466 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:49.466 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:49.466 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:49.466 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:49.466 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_251484 00:04:49.466 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:49.466 associated memzone info: size: 0.125366 MiB name: RG_ring_2_251484 00:04:49.466 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:49.466 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:49.466 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:49.466 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:49.466 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:49.466 associated memzone info: size: 0.015991 MiB name: RG_ring_3_251484 00:04:49.466 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:49.466 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:49.466 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:49.466 associated memzone info: size: 0.000183 MiB name: MP_msgpool_251484 00:04:49.466 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:49.466 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_251484 00:04:49.466 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:49.466 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_251484 00:04:49.466 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:49.466 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:49.466 12:14:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:49.466 12:14:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 251484 00:04:49.466 12:14:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 251484 ']' 00:04:49.466 12:14:32 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 251484 00:04:49.466 12:14:32 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:49.466 12:14:32 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.466 12:14:32 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 251484 00:04:49.466 12:14:32 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.466 12:14:32 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.466 12:14:32 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 251484' 00:04:49.466 killing process with pid 251484 00:04:49.466 12:14:32 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 251484 00:04:49.466 12:14:32 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 251484 00:04:49.725 00:04:49.725 real 0m1.030s 00:04:49.725 user 0m0.977s 00:04:49.725 sys 0m0.398s 00:04:49.725 12:14:32 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.725 12:14:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:49.725 ************************************ 00:04:49.725 END TEST dpdk_mem_utility 00:04:49.725 ************************************ 00:04:49.725 12:14:32 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:49.725 12:14:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.725 12:14:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.725 12:14:32 -- common/autotest_common.sh@10 -- # set +x 00:04:49.984 ************************************ 00:04:49.984 START TEST event 00:04:49.984 ************************************ 00:04:49.984 12:14:32 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:49.984 * Looking for test storage... 
00:04:49.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:49.984 12:14:32 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.984 12:14:32 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.984 12:14:32 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.984 12:14:33 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.985 12:14:33 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.985 12:14:33 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.985 12:14:33 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.985 12:14:33 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.985 12:14:33 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.985 12:14:33 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.985 12:14:33 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.985 12:14:33 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.985 12:14:33 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.985 12:14:33 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.985 12:14:33 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.985 12:14:33 event -- scripts/common.sh@344 -- # case "$op" in 00:04:49.985 12:14:33 event -- scripts/common.sh@345 -- # : 1 00:04:49.985 12:14:33 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.985 12:14:33 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.985 12:14:33 event -- scripts/common.sh@365 -- # decimal 1 00:04:49.985 12:14:33 event -- scripts/common.sh@353 -- # local d=1 00:04:49.985 12:14:33 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.985 12:14:33 event -- scripts/common.sh@355 -- # echo 1 00:04:49.985 12:14:33 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.985 12:14:33 event -- scripts/common.sh@366 -- # decimal 2 00:04:49.985 12:14:33 event -- scripts/common.sh@353 -- # local d=2 00:04:49.985 12:14:33 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.985 12:14:33 event -- scripts/common.sh@355 -- # echo 2 00:04:49.985 12:14:33 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.985 12:14:33 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.985 12:14:33 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.985 12:14:33 event -- scripts/common.sh@368 -- # return 0 00:04:49.985 12:14:33 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.985 12:14:33 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.985 --rc genhtml_branch_coverage=1 00:04:49.985 --rc genhtml_function_coverage=1 00:04:49.985 --rc genhtml_legend=1 00:04:49.985 --rc geninfo_all_blocks=1 00:04:49.985 --rc geninfo_unexecuted_blocks=1 00:04:49.985 00:04:49.985 ' 00:04:49.985 12:14:33 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.985 --rc genhtml_branch_coverage=1 00:04:49.985 --rc genhtml_function_coverage=1 00:04:49.985 --rc genhtml_legend=1 00:04:49.985 --rc geninfo_all_blocks=1 00:04:49.985 --rc geninfo_unexecuted_blocks=1 00:04:49.985 00:04:49.985 ' 00:04:49.985 12:14:33 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.985 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:49.985 --rc genhtml_branch_coverage=1 00:04:49.985 --rc genhtml_function_coverage=1 00:04:49.985 --rc genhtml_legend=1 00:04:49.985 --rc geninfo_all_blocks=1 00:04:49.985 --rc geninfo_unexecuted_blocks=1 00:04:49.985 00:04:49.985 ' 00:04:49.985 12:14:33 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.985 --rc genhtml_branch_coverage=1 00:04:49.985 --rc genhtml_function_coverage=1 00:04:49.985 --rc genhtml_legend=1 00:04:49.985 --rc geninfo_all_blocks=1 00:04:49.985 --rc geninfo_unexecuted_blocks=1 00:04:49.985 00:04:49.985 ' 00:04:49.985 12:14:33 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:49.985 12:14:33 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:49.985 12:14:33 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:49.985 12:14:33 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:49.985 12:14:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.985 12:14:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.985 ************************************ 00:04:49.985 START TEST event_perf 00:04:49.985 ************************************ 00:04:49.985 12:14:33 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:49.985 Running I/O for 1 seconds...[2024-11-20 12:14:33.079652] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:04:49.985 [2024-11-20 12:14:33.079721] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251774 ] 00:04:50.244 [2024-11-20 12:14:33.159923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:50.244 [2024-11-20 12:14:33.203799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.244 [2024-11-20 12:14:33.203907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:50.244 [2024-11-20 12:14:33.204015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.244 [2024-11-20 12:14:33.204016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:51.181 Running I/O for 1 seconds... 00:04:51.181 lcore 0: 201381 00:04:51.181 lcore 1: 201380 00:04:51.181 lcore 2: 201378 00:04:51.181 lcore 3: 201379 00:04:51.181 done. 
00:04:51.181 00:04:51.181 real 0m1.186s 00:04:51.181 user 0m4.103s 00:04:51.181 sys 0m0.079s 00:04:51.181 12:14:34 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.181 12:14:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:51.181 ************************************ 00:04:51.181 END TEST event_perf 00:04:51.181 ************************************ 00:04:51.181 12:14:34 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:51.181 12:14:34 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:51.181 12:14:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.181 12:14:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.440 ************************************ 00:04:51.440 START TEST event_reactor 00:04:51.440 ************************************ 00:04:51.440 12:14:34 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:51.440 [2024-11-20 12:14:34.326473] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:04:51.440 [2024-11-20 12:14:34.326539] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid252026 ] 00:04:51.440 [2024-11-20 12:14:34.403209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.440 [2024-11-20 12:14:34.443291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.377 test_start 00:04:52.377 oneshot 00:04:52.377 tick 100 00:04:52.377 tick 100 00:04:52.377 tick 250 00:04:52.377 tick 100 00:04:52.377 tick 100 00:04:52.377 tick 250 00:04:52.377 tick 100 00:04:52.377 tick 500 00:04:52.377 tick 100 00:04:52.377 tick 100 00:04:52.377 tick 250 00:04:52.377 tick 100 00:04:52.377 tick 100 00:04:52.377 test_end 00:04:52.377 00:04:52.377 real 0m1.174s 00:04:52.377 user 0m1.095s 00:04:52.377 sys 0m0.075s 00:04:52.377 12:14:35 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.377 12:14:35 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:52.377 ************************************ 00:04:52.377 END TEST event_reactor 00:04:52.377 ************************************ 00:04:52.637 12:14:35 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:52.637 12:14:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:52.637 12:14:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.637 12:14:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.637 ************************************ 00:04:52.637 START TEST event_reactor_perf 00:04:52.637 ************************************ 00:04:52.637 12:14:35 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:52.637 [2024-11-20 12:14:35.570738] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:04:52.637 [2024-11-20 12:14:35.570812] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid252273 ] 00:04:52.637 [2024-11-20 12:14:35.647589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.637 [2024-11-20 12:14:35.688528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.016 test_start 00:04:54.016 test_end 00:04:54.016 Performance: 499877 events per second 00:04:54.016 00:04:54.016 real 0m1.176s 00:04:54.016 user 0m1.099s 00:04:54.016 sys 0m0.073s 00:04:54.016 12:14:36 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.016 12:14:36 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:54.016 ************************************ 00:04:54.016 END TEST event_reactor_perf 00:04:54.016 ************************************ 00:04:54.016 12:14:36 event -- event/event.sh@49 -- # uname -s 00:04:54.016 12:14:36 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:54.016 12:14:36 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:54.016 12:14:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.016 12:14:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.016 12:14:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.016 ************************************ 00:04:54.016 START TEST event_scheduler 00:04:54.016 ************************************ 00:04:54.016 12:14:36 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:54.016 * Looking for test storage... 00:04:54.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:54.016 12:14:36 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.016 12:14:36 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.017 12:14:36 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.017 12:14:36 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.017 12:14:36 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:54.017 12:14:36 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.017 12:14:36 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.017 --rc genhtml_branch_coverage=1 00:04:54.017 --rc genhtml_function_coverage=1 00:04:54.017 --rc genhtml_legend=1 00:04:54.017 --rc geninfo_all_blocks=1 00:04:54.017 --rc geninfo_unexecuted_blocks=1 00:04:54.017 00:04:54.017 ' 00:04:54.017 12:14:36 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.017 --rc genhtml_branch_coverage=1 00:04:54.017 --rc genhtml_function_coverage=1 00:04:54.017 --rc 
genhtml_legend=1 00:04:54.017 --rc geninfo_all_blocks=1 00:04:54.017 --rc geninfo_unexecuted_blocks=1 00:04:54.017 00:04:54.017 ' 00:04:54.017 12:14:36 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.017 --rc genhtml_branch_coverage=1 00:04:54.017 --rc genhtml_function_coverage=1 00:04:54.017 --rc genhtml_legend=1 00:04:54.017 --rc geninfo_all_blocks=1 00:04:54.017 --rc geninfo_unexecuted_blocks=1 00:04:54.017 00:04:54.017 ' 00:04:54.017 12:14:36 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.017 --rc genhtml_branch_coverage=1 00:04:54.017 --rc genhtml_function_coverage=1 00:04:54.017 --rc genhtml_legend=1 00:04:54.017 --rc geninfo_all_blocks=1 00:04:54.017 --rc geninfo_unexecuted_blocks=1 00:04:54.017 00:04:54.017 ' 00:04:54.017 12:14:36 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:54.017 12:14:36 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=252558 00:04:54.017 12:14:36 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.017 12:14:36 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:54.017 12:14:36 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 252558 00:04:54.017 12:14:36 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 252558 ']' 00:04:54.017 12:14:36 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.017 12:14:36 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.017 12:14:36 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.017 12:14:36 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.017 12:14:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.017 [2024-11-20 12:14:37.020988] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:04:54.017 [2024-11-20 12:14:37.021040] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid252558 ] 00:04:54.017 [2024-11-20 12:14:37.095834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:54.277 [2024-11-20 12:14:37.139674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.277 [2024-11-20 12:14:37.139788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.277 [2024-11-20 12:14:37.139873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.277 [2024-11-20 12:14:37.139873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.277 12:14:37 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.277 12:14:37 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:54.277 12:14:37 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:54.277 12:14:37 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.277 12:14:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.277 [2024-11-20 12:14:37.188543] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:54.277 [2024-11-20 12:14:37.188561] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:54.277 [2024-11-20 12:14:37.188570] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:54.277 [2024-11-20 12:14:37.188576] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:54.277 [2024-11-20 12:14:37.188581] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:54.277 12:14:37 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.277 12:14:37 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:54.277 12:14:37 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.277 12:14:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.277 [2024-11-20 12:14:37.262584] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:54.277 12:14:37 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.277 12:14:37 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:54.277 12:14:37 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.277 12:14:37 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.277 12:14:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.277 ************************************ 00:04:54.277 START TEST scheduler_create_thread 00:04:54.277 ************************************ 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.277 2 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.277 3 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.277 4 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.277 5 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.277 12:14:37 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.277 6 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.277 7 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.277 8 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.277 12:14:37 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.277 9 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.277 10 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.277 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.536 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.536 12:14:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:54.536 12:14:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:54.536 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.536 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.794 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.795 12:14:37 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:54.795 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.795 12:14:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.699 12:14:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.699 12:14:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:56.699 12:14:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:56.699 12:14:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.699 12:14:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.637 12:14:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.637 00:04:57.637 real 0m3.101s 00:04:57.637 user 0m0.024s 00:04:57.637 sys 0m0.006s 00:04:57.637 12:14:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.637 12:14:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.637 ************************************ 00:04:57.637 END TEST scheduler_create_thread 00:04:57.637 ************************************ 00:04:57.637 12:14:40 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:57.637 12:14:40 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 252558 00:04:57.637 12:14:40 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 252558 ']' 00:04:57.637 12:14:40 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 252558 00:04:57.637 12:14:40 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:57.637 12:14:40 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.637 12:14:40 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 252558 00:04:57.637 12:14:40 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:57.637 12:14:40 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:57.637 12:14:40 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 252558' 00:04:57.637 killing process with pid 252558 00:04:57.637 12:14:40 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 252558 00:04:57.637 12:14:40 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 252558 00:04:57.896 [2024-11-20 12:14:40.782048] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:57.896 00:04:57.896 real 0m4.168s 00:04:57.896 user 0m6.695s 00:04:57.896 sys 0m0.363s 00:04:57.896 12:14:40 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.896 12:14:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.896 ************************************ 00:04:57.896 END TEST event_scheduler 00:04:57.896 ************************************ 00:04:57.896 12:14:41 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:57.896 12:14:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:57.896 12:14:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.896 12:14:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.896 12:14:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.155 ************************************ 00:04:58.155 START TEST app_repeat 00:04:58.155 ************************************ 00:04:58.155 12:14:41 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:58.155 12:14:41 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.155 12:14:41 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.155 12:14:41 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:58.155 12:14:41 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.155 12:14:41 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:58.155 12:14:41 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:58.155 12:14:41 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:58.155 12:14:41 event.app_repeat -- event/event.sh@19 -- # repeat_pid=253297 00:04:58.155 12:14:41 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.155 12:14:41 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:58.155 12:14:41 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 253297' 00:04:58.155 Process app_repeat pid: 253297 00:04:58.155 12:14:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:58.155 12:14:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:58.155 spdk_app_start Round 0 00:04:58.155 12:14:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 253297 /var/tmp/spdk-nbd.sock 00:04:58.155 12:14:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 253297 ']' 00:04:58.155 12:14:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.155 12:14:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.155 12:14:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:58.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:58.155 12:14:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.155 12:14:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:58.155 [2024-11-20 12:14:41.083274] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:04:58.155 [2024-11-20 12:14:41.083329] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid253297 ] 00:04:58.155 [2024-11-20 12:14:41.160046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.155 [2024-11-20 12:14:41.203319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.155 [2024-11-20 12:14:41.203320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.415 12:14:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.415 12:14:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:58.415 12:14:41 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.415 Malloc0 00:04:58.415 12:14:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.674 Malloc1 00:04:58.674 12:14:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.674 12:14:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.674 12:14:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.674 12:14:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:58.674 12:14:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.674 12:14:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:58.674 12:14:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.674 
12:14:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.675 12:14:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.675 12:14:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:58.675 12:14:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.675 12:14:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:58.675 12:14:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:58.675 12:14:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:58.675 12:14:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.675 12:14:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:58.934 /dev/nbd0 00:04:58.934 12:14:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:58.934 12:14:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:58.934 12:14:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:58.934 12:14:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:58.934 12:14:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:58.934 12:14:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:58.934 12:14:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:58.934 12:14:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:58.934 12:14:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:58.934 12:14:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:58.934 12:14:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:58.934 1+0 records in 00:04:58.934 1+0 records out 00:04:58.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208671 s, 19.6 MB/s 00:04:58.934 12:14:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.934 12:14:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:58.934 12:14:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.934 12:14:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:58.934 12:14:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:58.934 12:14:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.934 12:14:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.934 12:14:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.193 /dev/nbd1 00:04:59.193 12:14:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.193 12:14:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.193 12:14:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:59.193 12:14:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:59.193 12:14:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:59.193 12:14:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:59.193 12:14:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:59.193 12:14:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:59.193 12:14:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:59.193 12:14:42 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:59.193 12:14:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.193 1+0 records in 00:04:59.193 1+0 records out 00:04:59.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244389 s, 16.8 MB/s 00:04:59.193 12:14:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.193 12:14:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:59.193 12:14:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.193 12:14:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:59.193 12:14:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:59.193 12:14:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.193 12:14:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.193 12:14:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.193 12:14:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.193 12:14:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:59.453 { 00:04:59.453 "nbd_device": "/dev/nbd0", 00:04:59.453 "bdev_name": "Malloc0" 00:04:59.453 }, 00:04:59.453 { 00:04:59.453 "nbd_device": "/dev/nbd1", 00:04:59.453 "bdev_name": "Malloc1" 00:04:59.453 } 00:04:59.453 ]' 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:59.453 { 00:04:59.453 "nbd_device": "/dev/nbd0", 00:04:59.453 "bdev_name": "Malloc0" 00:04:59.453 
}, 00:04:59.453 { 00:04:59.453 "nbd_device": "/dev/nbd1", 00:04:59.453 "bdev_name": "Malloc1" 00:04:59.453 } 00:04:59.453 ]' 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.453 /dev/nbd1' 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:59.453 /dev/nbd1' 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:59.453 256+0 records in 00:04:59.453 256+0 records out 00:04:59.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103282 s, 102 MB/s 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:59.453 256+0 records in 00:04:59.453 256+0 records out 00:04:59.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143409 s, 73.1 MB/s 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:59.453 256+0 records in 00:04:59.453 256+0 records out 00:04:59.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148564 s, 70.6 MB/s 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:59.453 12:14:42 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.453 12:14:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:59.713 12:14:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:59.713 12:14:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:59.713 12:14:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:59.713 12:14:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.713 12:14:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.713 12:14:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:59.713 12:14:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.713 12:14:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.713 12:14:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.713 12:14:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:59.972 12:14:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:59.972 12:14:42 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:59.972 12:14:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:59.972 12:14:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.972 12:14:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.972 12:14:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:59.972 12:14:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.972 12:14:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.972 12:14:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.972 12:14:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.972 12:14:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.231 12:14:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:00.231 12:14:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:00.231 12:14:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.231 12:14:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:00.231 12:14:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:00.231 12:14:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.231 12:14:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:00.231 12:14:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:00.231 12:14:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:00.231 12:14:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:00.231 12:14:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:00.231 12:14:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:00.231 12:14:43 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:00.490 12:14:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:00.490 [2024-11-20 12:14:43.590074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.749 [2024-11-20 12:14:43.628226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.749 [2024-11-20 12:14:43.628227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.749 [2024-11-20 12:14:43.669396] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:00.749 [2024-11-20 12:14:43.669439] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:04.038 12:14:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:04.038 12:14:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:04.038 spdk_app_start Round 1 00:05:04.038 12:14:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 253297 /var/tmp/spdk-nbd.sock 00:05:04.038 12:14:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 253297 ']' 00:05:04.038 12:14:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.038 12:14:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.038 12:14:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:04.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
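The `waitfornbd` / `waitfornbd_exit` pattern that repeats throughout the trace above (increment `i` from 1 to 20, `grep -q -w` the device name in `/proc/partitions`, `break` on a hit) can be sketched roughly as below. This is a reconstruction from the xtrace output, not the exact SPDK helper; the sleep interval and the helper name are assumptions.

```shell
# Poll /proc/partitions until the named nbd device appears (or give up).
# Reconstructed from the xtrace above; the real helper lives in SPDK's
# test/common/autotest_common.sh and may differ in details.
waitfornbd() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            return 0        # device is visible in the partition table
        fi
        sleep 0.1           # assumed back-off between polls
    done
    return 1                # device never appeared within the retry budget
}
```

`waitfornbd_exit` in the trace is the mirror image: it loops until the device name is *absent* from `/proc/partitions` after `nbd_stop_disk`.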
00:05:04.038 12:14:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.038 12:14:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:04.038 12:14:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.038 12:14:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:04.038 12:14:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.038 Malloc0 00:05:04.038 12:14:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.038 Malloc1 00:05:04.038 12:14:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.038 12:14:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.038 12:14:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.038 12:14:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.038 12:14:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.038 12:14:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.038 12:14:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.038 12:14:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.038 12:14:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.038 12:14:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.038 12:14:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.038 12:14:47 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:04.038 12:14:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.038 12:14:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.038 12:14:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.038 12:14:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.298 /dev/nbd0 00:05:04.298 12:14:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.298 12:14:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:04.298 12:14:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:04.298 12:14:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:04.298 12:14:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:04.298 12:14:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:04.298 12:14:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:04.298 12:14:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:04.298 12:14:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:04.298 12:14:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:04.298 12:14:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.298 1+0 records in 00:05:04.298 1+0 records out 00:05:04.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232895 s, 17.6 MB/s 00:05:04.298 12:14:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.298 12:14:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:04.298 12:14:47 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.298 12:14:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:04.298 12:14:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:04.298 12:14:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.298 12:14:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.298 12:14:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:04.557 /dev/nbd1 00:05:04.557 12:14:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:04.557 12:14:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:04.557 12:14:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:04.557 12:14:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:04.557 12:14:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:04.557 12:14:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:04.557 12:14:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:04.557 12:14:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:04.557 12:14:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:04.557 12:14:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:04.557 12:14:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.557 1+0 records in 00:05:04.557 1+0 records out 00:05:04.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232306 s, 17.6 MB/s 00:05:04.557 12:14:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.557 12:14:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:04.557 12:14:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.557 12:14:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:04.557 12:14:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:04.557 12:14:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.557 12:14:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.557 12:14:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.557 12:14:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.557 12:14:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:04.816 { 00:05:04.816 "nbd_device": "/dev/nbd0", 00:05:04.816 "bdev_name": "Malloc0" 00:05:04.816 }, 00:05:04.816 { 00:05:04.816 "nbd_device": "/dev/nbd1", 00:05:04.816 "bdev_name": "Malloc1" 00:05:04.816 } 00:05:04.816 ]' 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:04.816 { 00:05:04.816 "nbd_device": "/dev/nbd0", 00:05:04.816 "bdev_name": "Malloc0" 00:05:04.816 }, 00:05:04.816 { 00:05:04.816 "nbd_device": "/dev/nbd1", 00:05:04.816 "bdev_name": "Malloc1" 00:05:04.816 } 00:05:04.816 ]' 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:04.816 /dev/nbd1' 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:04.816 /dev/nbd1' 00:05:04.816 
12:14:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:04.816 256+0 records in 00:05:04.816 256+0 records out 00:05:04.816 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107127 s, 97.9 MB/s 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:04.816 256+0 records in 00:05:04.816 256+0 records out 00:05:04.816 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140129 s, 74.8 MB/s 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:04.816 256+0 records in 00:05:04.816 256+0 records out 00:05:04.816 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152044 s, 69.0 MB/s 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.816 12:14:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.076 12:14:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.076 12:14:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.076 12:14:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.076 12:14:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.076 12:14:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.076 12:14:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.076 12:14:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.076 12:14:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.076 12:14:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.076 12:14:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:05.335 12:14:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:05.335 12:14:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:05.335 12:14:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:05.335 12:14:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.335 12:14:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.335 12:14:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:05.335 12:14:48 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:05.335 12:14:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.335 12:14:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.335 12:14:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.335 12:14:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.594 12:14:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:05.594 12:14:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:05.594 12:14:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.594 12:14:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:05.594 12:14:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:05.594 12:14:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.594 12:14:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:05.594 12:14:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:05.594 12:14:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:05.594 12:14:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:05.594 12:14:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:05.594 12:14:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:05.594 12:14:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:05.854 12:14:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:05.854 [2024-11-20 12:14:48.934782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.113 [2024-11-20 12:14:48.972396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.113 [2024-11-20 12:14:48.972396] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.113 [2024-11-20 12:14:49.014228] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.113 [2024-11-20 12:14:49.014268] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.403 12:14:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.403 12:14:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:09.403 spdk_app_start Round 2 00:05:09.403 12:14:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 253297 /var/tmp/spdk-nbd.sock 00:05:09.403 12:14:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 253297 ']' 00:05:09.403 12:14:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.403 12:14:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.403 12:14:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
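The write/verify round trip that `nbd_dd_data_verify` performs in the trace above (fill a scratch file with 1 MiB of `/dev/urandom`, `dd` it onto each nbd device, then `cmp` each device back against the file) can be sketched as one combined function. This merges the separate `write` and `verify` operations of the real helper into a single pass for illustration, and drops the `oflag=direct`/`iflag=direct` flags so it also works on regular files; both simplifications are assumptions.

```shell
# Write 1 MiB of random data to every target, then verify each target
# byte-for-byte against the source file, mirroring the dd/cmp sequence
# in the trace. Targets may be nbd devices or plain files.
nbd_dd_data_verify() {
    local tmp_file=$1; shift
    local targets=("$@")
    local t
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
    for t in "${targets[@]}"; do
        dd if="$tmp_file" of="$t" bs=4096 count=256 status=none
    done
    for t in "${targets[@]}"; do
        cmp -b -n 1M "$tmp_file" "$t" || return 1   # mismatch fails the test
    done
    rm -f "$tmp_file"                               # matches the rm at sh@85
}
```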
00:05:09.403 12:14:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.403 12:14:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.403 12:14:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.403 12:14:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:09.403 12:14:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.403 Malloc0 00:05:09.403 12:14:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.403 Malloc1 00:05:09.403 12:14:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.403 12:14:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.403 12:14:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.403 12:14:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.403 12:14:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.403 12:14:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.403 12:14:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.403 12:14:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.403 12:14:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.403 12:14:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.403 12:14:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.403 12:14:52 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:09.403 12:14:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:09.403 12:14:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.403 12:14:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.403 12:14:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:09.662 /dev/nbd0 00:05:09.662 12:14:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:09.662 12:14:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:09.662 12:14:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:09.662 12:14:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:09.662 12:14:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:09.662 12:14:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:09.662 12:14:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:09.662 12:14:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:09.662 12:14:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:09.662 12:14:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:09.662 12:14:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.662 1+0 records in 00:05:09.662 1+0 records out 00:05:09.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023742 s, 17.3 MB/s 00:05:09.662 12:14:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.662 12:14:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:09.662 12:14:52 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.662 12:14:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:09.662 12:14:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:09.662 12:14:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.662 12:14:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.662 12:14:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:09.922 /dev/nbd1 00:05:09.922 12:14:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:09.922 12:14:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:09.922 12:14:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:09.922 12:14:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:09.922 12:14:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:09.922 12:14:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:09.922 12:14:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:09.922 12:14:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:09.922 12:14:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:09.922 12:14:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:09.922 12:14:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.922 1+0 records in 00:05:09.922 1+0 records out 00:05:09.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234439 s, 17.5 MB/s 00:05:09.922 12:14:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.922 12:14:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:09.922 12:14:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.922 12:14:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:09.922 12:14:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:09.922 12:14:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.922 12:14:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.922 12:14:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.922 12:14:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.922 12:14:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.181 { 00:05:10.181 "nbd_device": "/dev/nbd0", 00:05:10.181 "bdev_name": "Malloc0" 00:05:10.181 }, 00:05:10.181 { 00:05:10.181 "nbd_device": "/dev/nbd1", 00:05:10.181 "bdev_name": "Malloc1" 00:05:10.181 } 00:05:10.181 ]' 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.181 { 00:05:10.181 "nbd_device": "/dev/nbd0", 00:05:10.181 "bdev_name": "Malloc0" 00:05:10.181 }, 00:05:10.181 { 00:05:10.181 "nbd_device": "/dev/nbd1", 00:05:10.181 "bdev_name": "Malloc1" 00:05:10.181 } 00:05:10.181 ]' 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.181 /dev/nbd1' 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.181 /dev/nbd1' 00:05:10.181 
12:14:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.181 256+0 records in 00:05:10.181 256+0 records out 00:05:10.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102514 s, 102 MB/s 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.181 256+0 records in 00:05:10.181 256+0 records out 00:05:10.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139313 s, 75.3 MB/s 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:10.181 256+0 records in 00:05:10.181 256+0 records out 00:05:10.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156817 s, 66.9 MB/s 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.181 12:14:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:10.182 12:14:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:10.182 12:14:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.182 12:14:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:10.182 12:14:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.182 12:14:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:10.182 12:14:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.182 12:14:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:10.182 12:14:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.182 12:14:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:10.182 12:14:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:10.182 12:14:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:10.182 12:14:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.182 12:14:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:10.441 12:14:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:10.441 12:14:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:10.441 12:14:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:10.441 12:14:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.441 12:14:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.441 12:14:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:10.441 12:14:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.441 12:14:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.441 12:14:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.441 12:14:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:10.700 12:14:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:10.700 12:14:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:10.700 12:14:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:10.700 12:14:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.700 12:14:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.700 12:14:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:10.700 12:14:53 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:10.700 12:14:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.700 12:14:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.700 12:14:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.700 12:14:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.959 12:14:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:10.959 12:14:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:10.959 12:14:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.959 12:14:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:10.959 12:14:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:10.959 12:14:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.959 12:14:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:10.959 12:14:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:10.959 12:14:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:10.959 12:14:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:10.959 12:14:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:10.959 12:14:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:10.959 12:14:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:11.218 12:14:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:11.218 [2024-11-20 12:14:54.272238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.218 [2024-11-20 12:14:54.309518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.218 [2024-11-20 12:14:54.309520] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.477 [2024-11-20 12:14:54.351038] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:11.477 [2024-11-20 12:14:54.351079] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:14.013 12:14:57 event.app_repeat -- event/event.sh@38 -- # waitforlisten 253297 /var/tmp/spdk-nbd.sock 00:05:14.013 12:14:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 253297 ']' 00:05:14.013 12:14:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.013 12:14:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.013 12:14:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:14.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:14.013 12:14:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.013 12:14:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.272 12:14:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.272 12:14:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:14.272 12:14:57 event.app_repeat -- event/event.sh@39 -- # killprocess 253297 00:05:14.272 12:14:57 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 253297 ']' 00:05:14.272 12:14:57 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 253297 00:05:14.272 12:14:57 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:14.272 12:14:57 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.272 12:14:57 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 253297 00:05:14.272 12:14:57 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.272 12:14:57 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.272 12:14:57 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 253297' 00:05:14.272 killing process with pid 253297 00:05:14.272 12:14:57 event.app_repeat -- common/autotest_common.sh@973 -- # kill 253297 00:05:14.272 12:14:57 event.app_repeat -- common/autotest_common.sh@978 -- # wait 253297 00:05:14.531 spdk_app_start is called in Round 0. 00:05:14.532 Shutdown signal received, stop current app iteration 00:05:14.532 Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 reinitialization... 00:05:14.532 spdk_app_start is called in Round 1. 00:05:14.532 Shutdown signal received, stop current app iteration 00:05:14.532 Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 reinitialization... 00:05:14.532 spdk_app_start is called in Round 2. 
00:05:14.532 Shutdown signal received, stop current app iteration 00:05:14.532 Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 reinitialization... 00:05:14.532 spdk_app_start is called in Round 3. 00:05:14.532 Shutdown signal received, stop current app iteration 00:05:14.532 12:14:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:14.532 12:14:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:14.532 00:05:14.532 real 0m16.473s 00:05:14.532 user 0m36.216s 00:05:14.532 sys 0m2.602s 00:05:14.532 12:14:57 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.532 12:14:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.532 ************************************ 00:05:14.532 END TEST app_repeat 00:05:14.532 ************************************ 00:05:14.532 12:14:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:14.532 12:14:57 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:14.532 12:14:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.532 12:14:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.532 12:14:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.532 ************************************ 00:05:14.532 START TEST cpu_locks 00:05:14.532 ************************************ 00:05:14.532 12:14:57 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:14.791 * Looking for test storage... 
00:05:14.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:14.791 12:14:57 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.791 12:14:57 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.791 12:14:57 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:14.791 12:14:57 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.791 12:14:57 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:14.791 12:14:57 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.791 12:14:57 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:14.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.791 --rc genhtml_branch_coverage=1 00:05:14.791 --rc genhtml_function_coverage=1 00:05:14.791 --rc genhtml_legend=1 00:05:14.791 --rc geninfo_all_blocks=1 00:05:14.791 --rc geninfo_unexecuted_blocks=1 00:05:14.791 00:05:14.791 ' 00:05:14.791 12:14:57 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:14.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.791 --rc genhtml_branch_coverage=1 00:05:14.791 --rc genhtml_function_coverage=1 00:05:14.791 --rc genhtml_legend=1 00:05:14.791 --rc geninfo_all_blocks=1 00:05:14.791 --rc geninfo_unexecuted_blocks=1 
00:05:14.791 00:05:14.791 ' 00:05:14.791 12:14:57 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:14.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.791 --rc genhtml_branch_coverage=1 00:05:14.791 --rc genhtml_function_coverage=1 00:05:14.791 --rc genhtml_legend=1 00:05:14.791 --rc geninfo_all_blocks=1 00:05:14.791 --rc geninfo_unexecuted_blocks=1 00:05:14.791 00:05:14.791 ' 00:05:14.791 12:14:57 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:14.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.791 --rc genhtml_branch_coverage=1 00:05:14.791 --rc genhtml_function_coverage=1 00:05:14.791 --rc genhtml_legend=1 00:05:14.791 --rc geninfo_all_blocks=1 00:05:14.791 --rc geninfo_unexecuted_blocks=1 00:05:14.791 00:05:14.791 ' 00:05:14.791 12:14:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:14.791 12:14:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:14.791 12:14:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:14.791 12:14:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:14.791 12:14:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.791 12:14:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.791 12:14:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.791 ************************************ 00:05:14.791 START TEST default_locks 00:05:14.791 ************************************ 00:05:14.791 12:14:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:14.791 12:14:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=256300 00:05:14.791 12:14:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 256300 00:05:14.791 12:14:57 event.cpu_locks.default_locks 
-- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.791 12:14:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 256300 ']' 00:05:14.791 12:14:57 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.791 12:14:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.791 12:14:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.792 12:14:57 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.792 12:14:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.792 [2024-11-20 12:14:57.851063] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:05:14.792 [2024-11-20 12:14:57.851103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid256300 ] 00:05:15.051 [2024-11-20 12:14:57.924685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.051 [2024-11-20 12:14:57.967461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.310 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.310 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:15.310 12:14:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 256300 00:05:15.310 12:14:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 256300 00:05:15.310 12:14:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:15.570 lslocks: write error 00:05:15.570 12:14:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 256300 00:05:15.570 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 256300 ']' 00:05:15.570 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 256300 00:05:15.570 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:15.570 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.570 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 256300 00:05:15.570 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.570 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.570 12:14:58 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 256300' 00:05:15.570 killing process with pid 256300 00:05:15.570 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 256300 00:05:15.570 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 256300 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 256300 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 256300 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 256300 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 256300 ']' 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (256300) - No such process 00:05:15.830 ERROR: process (pid: 256300) is no longer running 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:15.830 00:05:15.830 real 0m1.088s 00:05:15.830 user 0m1.034s 00:05:15.830 sys 0m0.504s 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.830 12:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.830 ************************************ 00:05:15.830 END TEST default_locks 00:05:15.830 ************************************ 00:05:15.830 12:14:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:15.830 12:14:58 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.830 12:14:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.830 12:14:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.090 ************************************ 00:05:16.090 START TEST default_locks_via_rpc 00:05:16.090 ************************************ 00:05:16.090 12:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:16.090 12:14:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=256556 00:05:16.090 12:14:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 256556 00:05:16.090 12:14:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.090 12:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 256556 ']' 00:05:16.090 12:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.090 12:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.090 12:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.090 12:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.090 12:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.090 [2024-11-20 12:14:59.011365] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:05:16.090 [2024-11-20 12:14:59.011411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid256556 ] 00:05:16.090 [2024-11-20 12:14:59.086282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.090 [2024-11-20 12:14:59.125494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.348 12:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.348 12:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:16.348 12:14:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:16.348 12:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.349 12:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.349 12:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.349 12:14:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:16.349 12:14:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:16.349 12:14:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:16.349 12:14:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:16.349 12:14:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:16.349 12:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.349 12:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.349 12:14:59 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.349 12:14:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 256556 00:05:16.349 12:14:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 256556 00:05:16.349 12:14:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.607 12:14:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 256556 00:05:16.607 12:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 256556 ']' 00:05:16.607 12:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 256556 00:05:16.607 12:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:16.607 12:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.607 12:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 256556 00:05:16.866 12:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.866 12:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.866 12:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 256556' 00:05:16.866 killing process with pid 256556 00:05:16.866 12:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 256556 00:05:16.866 12:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 256556 00:05:17.126 00:05:17.126 real 0m1.075s 00:05:17.126 user 0m1.044s 00:05:17.126 sys 0m0.485s 00:05:17.126 12:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.126 12:15:00 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.126 ************************************ 00:05:17.126 END TEST default_locks_via_rpc 00:05:17.126 ************************************ 00:05:17.126 12:15:00 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:17.126 12:15:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.126 12:15:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.126 12:15:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.126 ************************************ 00:05:17.126 START TEST non_locking_app_on_locked_coremask 00:05:17.126 ************************************ 00:05:17.126 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:17.126 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=256826 00:05:17.126 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 256826 /var/tmp/spdk.sock 00:05:17.126 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.126 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 256826 ']' 00:05:17.126 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.126 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.126 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:17.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.126 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.126 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.126 [2024-11-20 12:15:00.153128] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:05:17.126 [2024-11-20 12:15:00.153175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid256826 ] 00:05:17.126 [2024-11-20 12:15:00.229365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.384 [2024-11-20 12:15:00.273315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.950 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.950 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:17.950 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=256927 00:05:17.950 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 256927 /var/tmp/spdk2.sock 00:05:17.950 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:17.950 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 256927 ']' 00:05:17.950 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:17.950 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.950 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:17.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.950 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.950 12:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.950 [2024-11-20 12:15:01.042318] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:05:17.950 [2024-11-20 12:15:01.042373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid256927 ] 00:05:18.209 [2024-11-20 12:15:01.135806] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:18.209 [2024-11-20 12:15:01.135835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.209 [2024-11-20 12:15:01.223760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.797 12:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.797 12:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:18.797 12:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 256826 00:05:19.126 12:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.126 12:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 256826 00:05:19.480 lslocks: write error 00:05:19.480 12:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 256826 00:05:19.480 12:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 256826 ']' 00:05:19.480 12:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 256826 00:05:19.480 12:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:19.480 12:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.480 12:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 256826 00:05:19.480 12:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.480 12:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.480 12:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 256826' 00:05:19.480 killing process with pid 256826 00:05:19.480 12:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 256826 00:05:19.480 12:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 256826 00:05:20.144 12:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 256927 00:05:20.144 12:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 256927 ']' 00:05:20.144 12:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 256927 00:05:20.144 12:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:20.144 12:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.144 12:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 256927 00:05:20.144 12:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.144 12:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.145 12:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 256927' 00:05:20.145 killing process with pid 256927 00:05:20.145 12:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 256927 00:05:20.145 12:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 256927 00:05:20.404 00:05:20.404 real 0m3.239s 00:05:20.404 user 0m3.571s 00:05:20.404 sys 0m0.922s 00:05:20.404 12:15:03 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.404 12:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.404 ************************************ 00:05:20.404 END TEST non_locking_app_on_locked_coremask 00:05:20.404 ************************************ 00:05:20.404 12:15:03 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:20.404 12:15:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.404 12:15:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.404 12:15:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.404 ************************************ 00:05:20.404 START TEST locking_app_on_unlocked_coremask 00:05:20.404 ************************************ 00:05:20.404 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:20.404 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=257451 00:05:20.404 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 257451 /var/tmp/spdk.sock 00:05:20.404 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:20.404 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 257451 ']' 00:05:20.404 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.404 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.404 12:15:03 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.404 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.404 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.404 [2024-11-20 12:15:03.459051] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:05:20.404 [2024-11-20 12:15:03.459096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257451 ] 00:05:20.664 [2024-11-20 12:15:03.537731] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:20.664 [2024-11-20 12:15:03.537754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.664 [2024-11-20 12:15:03.580013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.924 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.924 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:20.924 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=257542 00:05:20.924 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 257542 /var/tmp/spdk2.sock 00:05:20.924 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:20.924 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 257542 ']' 00:05:20.924 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.924 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.924 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.924 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.924 12:15:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.924 [2024-11-20 12:15:03.847174] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
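The waitforlisten traces above poll until the spdk_tgt under test creates its UNIX domain socket (/var/tmp/spdk.sock or /var/tmp/spdk2.sock), with `local max_retries=100` bounding the loop. A minimal standalone sketch of that polling pattern, with a hypothetical name and a bare socket-file check in place of whatever handshake the real helper performs:

```shell
#!/usr/bin/env bash
# Hedged sketch, NOT the real waitforlisten from autotest_common.sh:
# poll until a UNIX domain socket node appears, up to max_retries times.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        if [[ -S $sock ]]; then   # socket file exists: target is listening
            return 0
        fi
        sleep 0.1                 # brief back-off between retries
    done
    return 1                      # target never came up
}
```

A real target would also need to answer RPCs before being considered up; this sketch only waits for the socket node itself to appear.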
00:05:20.924 [2024-11-20 12:15:03.847225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257542 ] 00:05:20.924 [2024-11-20 12:15:03.937932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.924 [2024-11-20 12:15:04.019360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.861 12:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.861 12:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:21.861 12:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 257542 00:05:21.861 12:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 257542 00:05:21.861 12:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.430 lslocks: write error 00:05:22.430 12:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 257451 00:05:22.430 12:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 257451 ']' 00:05:22.430 12:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 257451 00:05:22.430 12:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:22.430 12:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.430 12:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 257451 00:05:22.430 12:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.430 12:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.430 12:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 257451' 00:05:22.430 killing process with pid 257451 00:05:22.430 12:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 257451 00:05:22.430 12:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 257451 00:05:22.999 12:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 257542 00:05:22.999 12:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 257542 ']' 00:05:22.999 12:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 257542 00:05:22.999 12:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:22.999 12:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.999 12:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 257542 00:05:22.999 12:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.999 12:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.999 12:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 257542' 00:05:22.999 killing process with pid 257542 00:05:22.999 12:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 257542 00:05:22.999 12:15:06 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 257542 00:05:23.258 00:05:23.258 real 0m2.911s 00:05:23.258 user 0m3.062s 00:05:23.258 sys 0m0.986s 00:05:23.258 12:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.258 12:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.258 ************************************ 00:05:23.258 END TEST locking_app_on_unlocked_coremask 00:05:23.258 ************************************ 00:05:23.258 12:15:06 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:23.258 12:15:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.258 12:15:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.258 12:15:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.518 ************************************ 00:05:23.518 START TEST locking_app_on_locked_coremask 00:05:23.518 ************************************ 00:05:23.518 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:23.518 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=257952 00:05:23.518 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 257952 /var/tmp/spdk.sock 00:05:23.518 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.518 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 257952 ']' 00:05:23.518 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:05:23.518 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.518 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.518 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.518 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.518 [2024-11-20 12:15:06.441984] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:05:23.518 [2024-11-20 12:15:06.442029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257952 ] 00:05:23.518 [2024-11-20 12:15:06.519003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.518 [2024-11-20 12:15:06.561612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.778 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.778 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:23.778 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:23.778 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=258178 00:05:23.778 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 258178 /var/tmp/spdk2.sock 
00:05:23.778 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:23.778 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 258178 /var/tmp/spdk2.sock 00:05:23.778 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:23.778 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.778 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:23.778 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.778 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 258178 /var/tmp/spdk2.sock 00:05:23.778 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 258178 ']' 00:05:23.778 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.778 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.778 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
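The killprocess traces above follow a consistent shape: confirm the PID is alive with `kill -0`, identify it with `ps --no-headers -o comm=`, refuse to touch a `sudo` wrapper, then kill and reap it with `wait`. A hedged reconstruction inferred from the xtrace alone, not the actual autotest_common.sh source:

```shell
#!/usr/bin/env bash
# Hedged reconstruction of the killprocess pattern seen in the trace.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1        # gone already?
    local name
    name=$(ps --no-headers -o comm= -p "$pid")    # identify it first
    if [ "$name" = sudo ]; then                   # never kill a sudo wrapper
        return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap it if it was our child
}
```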
00:05:23.778 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.778 12:15:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.778 [2024-11-20 12:15:06.819147] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:05:23.778 [2024-11-20 12:15:06.819197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258178 ] 00:05:24.037 [2024-11-20 12:15:06.905522] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 257952 has claimed it. 00:05:24.037 [2024-11-20 12:15:06.905553] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:24.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (258178) - No such process 00:05:24.604 ERROR: process (pid: 258178) is no longer running 00:05:24.604 12:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.604 12:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:24.604 12:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:24.604 12:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:24.604 12:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:24.604 12:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:24.604 12:15:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 257952 00:05:24.604 12:15:07 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 257952 00:05:24.604 12:15:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.862 lslocks: write error 00:05:24.862 12:15:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 257952 00:05:24.862 12:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 257952 ']' 00:05:24.862 12:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 257952 00:05:24.862 12:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:24.862 12:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.862 12:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 257952 00:05:24.862 12:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.862 12:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.862 12:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 257952' 00:05:24.862 killing process with pid 257952 00:05:24.862 12:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 257952 00:05:24.862 12:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 257952 00:05:25.121 00:05:25.121 real 0m1.808s 00:05:25.121 user 0m1.924s 00:05:25.121 sys 0m0.600s 00:05:25.121 12:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.121 12:15:08 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:25.121 ************************************ 00:05:25.121 END TEST locking_app_on_locked_coremask 00:05:25.121 ************************************ 00:05:25.121 12:15:08 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:25.121 12:15:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.121 12:15:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.121 12:15:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.381 ************************************ 00:05:25.381 START TEST locking_overlapped_coremask 00:05:25.381 ************************************ 00:05:25.381 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:25.381 12:15:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=258523 00:05:25.381 12:15:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 258523 /var/tmp/spdk.sock 00:05:25.381 12:15:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:25.381 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 258523 ']' 00:05:25.381 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.381 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.381 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
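The "Cannot create lock on core 0, probably process 257952 has claimed it" error above reflects SPDK's per-core lock files (/var/tmp/spdk_cpu_lock_NNN): the first target locks one file per claimed core, and a second target claiming an overlapping mask fails to acquire the lock and exits. A rough emulation of that scheme using flock(1) and a scratch directory — the helper name and the use of flock rather than SPDK's actual locking call are assumptions:

```shell
#!/usr/bin/env bash
# Hedged emulation (not SPDK's implementation) of one-lock-file-per-core.
claim_core() {
    local core=$1 lockdir=$2
    local lockfile fd
    printf -v lockfile '%s/spdk_cpu_lock_%03d' "$lockdir" "$core"
    # Keep the fd open in the shell: the lock lives on that open file
    # description and is held until the fd is closed.
    exec {fd}>"$lockfile" || return 1
    if ! flock -n "$fd"; then   # non-blocking: fail if already claimed
        exec {fd}>&-            # another open description holds the lock
        return 1
    fi
}
```

A second claim on the same core fails even from the same shell, because flock locks attached to different open file descriptions conflict with each other.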
00:05:25.381 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.381 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.381 [2024-11-20 12:15:08.316399] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:05:25.381 [2024-11-20 12:15:08.316443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258523 ] 00:05:25.381 [2024-11-20 12:15:08.392399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:25.381 [2024-11-20 12:15:08.440969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.381 [2024-11-20 12:15:08.441003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.381 [2024-11-20 12:15:08.441003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.641 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.641 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:25.641 12:15:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=258610 00:05:25.641 12:15:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 258610 /var/tmp/spdk2.sock 00:05:25.641 12:15:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:25.641 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:25.641 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 258610 /var/tmp/spdk2.sock 00:05:25.641 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:25.641 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.641 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:25.641 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.641 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 258610 /var/tmp/spdk2.sock 00:05:25.641 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 258610 ']' 00:05:25.641 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.641 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.641 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.641 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.641 12:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.641 [2024-11-20 12:15:08.716998] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
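After the second target is refused, the check_remaining_locks step traced below asserts that exactly the first target's lock files survive, by comparing a glob of /var/tmp/spdk_cpu_lock_* against a brace expansion of the expected core numbers. A hedged standalone version pointed at a scratch directory, hard-coded to the three cores of the -m 0x7 mask:

```shell
#!/usr/bin/env bash
# Hedged standalone version of the check_remaining_locks comparison.
check_remaining_locks() {
    local dir=$1
    local locks=("$dir"/spdk_cpu_lock_*)                    # files on disk
    local locks_expected=("$dir"/spdk_cpu_lock_{000..002})  # cores 0-2 (-m 0x7)
    [[ ${locks[*]} == "${locks_expected[*]}" ]]             # exact match only
}
```

Both glob expansion and brace expansion produce lexically ordered lists here, so a plain string comparison of the joined arrays detects any missing or extra lock file.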
00:05:25.641 [2024-11-20 12:15:08.717051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258610 ] 00:05:25.900 [2024-11-20 12:15:08.808936] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 258523 has claimed it. 00:05:25.900 [2024-11-20 12:15:08.808984] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:26.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (258610) - No such process 00:05:26.468 ERROR: process (pid: 258610) is no longer running 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 258523 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 258523 ']' 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 258523 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 258523 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 258523' 00:05:26.468 killing process with pid 258523 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 258523 00:05:26.468 12:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 258523 00:05:26.728 00:05:26.728 real 0m1.454s 00:05:26.728 user 0m4.001s 00:05:26.728 sys 0m0.412s 00:05:26.728 12:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.728 12:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.728 ************************************ 
00:05:26.728 END TEST locking_overlapped_coremask 00:05:26.728 ************************************ 00:05:26.728 12:15:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:26.728 12:15:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.728 12:15:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.728 12:15:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.728 ************************************ 00:05:26.728 START TEST locking_overlapped_coremask_via_rpc 00:05:26.728 ************************************ 00:05:26.728 12:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:26.728 12:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=259066 00:05:26.728 12:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 259066 /var/tmp/spdk.sock 00:05:26.728 12:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:26.728 12:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 259066 ']' 00:05:26.728 12:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.728 12:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.728 12:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:26.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.728 12:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.728 12:15:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.728 [2024-11-20 12:15:09.839211] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:05:26.728 [2024-11-20 12:15:09.839254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid259066 ] 00:05:26.987 [2024-11-20 12:15:09.915963] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:26.987 [2024-11-20 12:15:09.915989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.987 [2024-11-20 12:15:09.960753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.987 [2024-11-20 12:15:09.960861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.987 [2024-11-20 12:15:09.960861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.246 12:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.246 12:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:27.246 12:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=259101 00:05:27.246 12:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 259101 /var/tmp/spdk2.sock 00:05:27.246 12:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:05:27.246 12:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 259101 ']' 00:05:27.246 12:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.246 12:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.246 12:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.246 12:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.246 12:15:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.246 [2024-11-20 12:15:10.222416] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:05:27.246 [2024-11-20 12:15:10.222464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid259101 ] 00:05:27.246 [2024-11-20 12:15:10.314827] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:27.246 [2024-11-20 12:15:10.314850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:27.505 [2024-11-20 12:15:10.402874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:27.505 [2024-11-20 12:15:10.402993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.505 [2024-11-20 12:15:10.402994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.072 12:15:11 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.072 [2024-11-20 12:15:11.093016] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 259066 has claimed it. 00:05:28.072 request: 00:05:28.072 { 00:05:28.072 "method": "framework_enable_cpumask_locks", 00:05:28.072 "req_id": 1 00:05:28.072 } 00:05:28.072 Got JSON-RPC error response 00:05:28.072 response: 00:05:28.072 { 00:05:28.072 "code": -32603, 00:05:28.072 "message": "Failed to claim CPU core: 2" 00:05:28.072 } 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:28.072 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 259066 /var/tmp/spdk.sock 00:05:28.073 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 259066 ']' 00:05:28.073 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.073 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.073 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.073 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.073 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.331 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.331 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:28.331 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 259101 /var/tmp/spdk2.sock 00:05:28.331 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 259101 ']' 00:05:28.331 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.331 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.331 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:28.331 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.331 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.590 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.590 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:28.590 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:28.590 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:28.590 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:28.590 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:28.590 00:05:28.590 real 0m1.732s 00:05:28.590 user 0m0.849s 00:05:28.590 sys 0m0.134s 00:05:28.590 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.590 12:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.590 ************************************ 00:05:28.590 END TEST locking_overlapped_coremask_via_rpc 00:05:28.590 ************************************ 00:05:28.590 12:15:11 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:28.590 12:15:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 259066 ]] 00:05:28.590 12:15:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 259066 00:05:28.590 12:15:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 259066 ']' 00:05:28.590 12:15:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 259066 00:05:28.590 12:15:11 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:28.590 12:15:11 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.590 12:15:11 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 259066 00:05:28.590 12:15:11 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.590 12:15:11 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.590 12:15:11 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 259066' 00:05:28.590 killing process with pid 259066 00:05:28.590 12:15:11 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 259066 00:05:28.590 12:15:11 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 259066 00:05:28.850 12:15:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 259101 ]] 00:05:28.850 12:15:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 259101 00:05:28.850 12:15:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 259101 ']' 00:05:28.850 12:15:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 259101 00:05:28.850 12:15:11 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:28.850 12:15:11 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.850 12:15:11 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 259101 00:05:29.109 12:15:11 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:29.109 12:15:11 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:29.109 12:15:11 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 259101' 00:05:29.109 
killing process with pid 259101 00:05:29.109 12:15:11 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 259101 00:05:29.109 12:15:11 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 259101 00:05:29.369 12:15:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:29.369 12:15:12 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:29.369 12:15:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 259066 ]] 00:05:29.369 12:15:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 259066 00:05:29.369 12:15:12 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 259066 ']' 00:05:29.369 12:15:12 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 259066 00:05:29.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (259066) - No such process 00:05:29.369 12:15:12 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 259066 is not found' 00:05:29.369 Process with pid 259066 is not found 00:05:29.369 12:15:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 259101 ]] 00:05:29.369 12:15:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 259101 00:05:29.369 12:15:12 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 259101 ']' 00:05:29.369 12:15:12 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 259101 00:05:29.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (259101) - No such process 00:05:29.369 12:15:12 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 259101 is not found' 00:05:29.369 Process with pid 259101 is not found 00:05:29.369 12:15:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:29.369 00:05:29.369 real 0m14.688s 00:05:29.369 user 0m25.369s 00:05:29.369 sys 0m4.967s 00:05:29.369 12:15:12 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.369 12:15:12 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:05:29.369 ************************************ 00:05:29.369 END TEST cpu_locks 00:05:29.369 ************************************ 00:05:29.369 00:05:29.369 real 0m39.466s 00:05:29.369 user 1m14.834s 00:05:29.369 sys 0m8.539s 00:05:29.369 12:15:12 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.369 12:15:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.369 ************************************ 00:05:29.369 END TEST event 00:05:29.369 ************************************ 00:05:29.369 12:15:12 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:29.369 12:15:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.369 12:15:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.369 12:15:12 -- common/autotest_common.sh@10 -- # set +x 00:05:29.369 ************************************ 00:05:29.369 START TEST thread 00:05:29.370 ************************************ 00:05:29.370 12:15:12 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:29.370 * Looking for test storage... 
00:05:29.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:29.370 12:15:12 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.370 12:15:12 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.370 12:15:12 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.629 12:15:12 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.629 12:15:12 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.629 12:15:12 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.629 12:15:12 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.629 12:15:12 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.629 12:15:12 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.629 12:15:12 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.629 12:15:12 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.629 12:15:12 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.629 12:15:12 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.629 12:15:12 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.629 12:15:12 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.629 12:15:12 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:29.629 12:15:12 thread -- scripts/common.sh@345 -- # : 1 00:05:29.629 12:15:12 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.629 12:15:12 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.629 12:15:12 thread -- scripts/common.sh@365 -- # decimal 1 00:05:29.629 12:15:12 thread -- scripts/common.sh@353 -- # local d=1 00:05:29.629 12:15:12 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.629 12:15:12 thread -- scripts/common.sh@355 -- # echo 1 00:05:29.629 12:15:12 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.629 12:15:12 thread -- scripts/common.sh@366 -- # decimal 2 00:05:29.629 12:15:12 thread -- scripts/common.sh@353 -- # local d=2 00:05:29.629 12:15:12 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.629 12:15:12 thread -- scripts/common.sh@355 -- # echo 2 00:05:29.629 12:15:12 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.629 12:15:12 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.629 12:15:12 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.629 12:15:12 thread -- scripts/common.sh@368 -- # return 0 00:05:29.629 12:15:12 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.629 12:15:12 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.629 --rc genhtml_branch_coverage=1 00:05:29.629 --rc genhtml_function_coverage=1 00:05:29.629 --rc genhtml_legend=1 00:05:29.629 --rc geninfo_all_blocks=1 00:05:29.629 --rc geninfo_unexecuted_blocks=1 00:05:29.629 00:05:29.629 ' 00:05:29.629 12:15:12 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.629 --rc genhtml_branch_coverage=1 00:05:29.629 --rc genhtml_function_coverage=1 00:05:29.629 --rc genhtml_legend=1 00:05:29.629 --rc geninfo_all_blocks=1 00:05:29.629 --rc geninfo_unexecuted_blocks=1 00:05:29.629 00:05:29.629 ' 00:05:29.629 12:15:12 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.629 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.629 --rc genhtml_branch_coverage=1 00:05:29.629 --rc genhtml_function_coverage=1 00:05:29.629 --rc genhtml_legend=1 00:05:29.629 --rc geninfo_all_blocks=1 00:05:29.629 --rc geninfo_unexecuted_blocks=1 00:05:29.629 00:05:29.629 ' 00:05:29.629 12:15:12 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.629 --rc genhtml_branch_coverage=1 00:05:29.629 --rc genhtml_function_coverage=1 00:05:29.629 --rc genhtml_legend=1 00:05:29.629 --rc geninfo_all_blocks=1 00:05:29.629 --rc geninfo_unexecuted_blocks=1 00:05:29.629 00:05:29.629 ' 00:05:29.629 12:15:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:29.629 12:15:12 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:29.629 12:15:12 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.629 12:15:12 thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.629 ************************************ 00:05:29.629 START TEST thread_poller_perf 00:05:29.629 ************************************ 00:05:29.629 12:15:12 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:29.629 [2024-11-20 12:15:12.614393] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:05:29.629 [2024-11-20 12:15:12.614458] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid259669 ] 00:05:29.629 [2024-11-20 12:15:12.693831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.629 [2024-11-20 12:15:12.734539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.629 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:31.008 [2024-11-20T11:15:14.124Z] ====================================== 00:05:31.008 [2024-11-20T11:15:14.124Z] busy:2307796800 (cyc) 00:05:31.008 [2024-11-20T11:15:14.124Z] total_run_count: 400000 00:05:31.008 [2024-11-20T11:15:14.124Z] tsc_hz: 2300000000 (cyc) 00:05:31.008 [2024-11-20T11:15:14.124Z] ====================================== 00:05:31.008 [2024-11-20T11:15:14.124Z] poller_cost: 5769 (cyc), 2508 (nsec) 00:05:31.008 00:05:31.008 real 0m1.186s 00:05:31.008 user 0m1.107s 00:05:31.008 sys 0m0.075s 00:05:31.008 12:15:13 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.008 12:15:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.008 ************************************ 00:05:31.008 END TEST thread_poller_perf 00:05:31.008 ************************************ 00:05:31.008 12:15:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:31.008 12:15:13 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:31.008 12:15:13 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.008 12:15:13 thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.008 ************************************ 00:05:31.008 START TEST thread_poller_perf 00:05:31.008 
************************************ 00:05:31.008 12:15:13 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:31.008 [2024-11-20 12:15:13.870474] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:05:31.008 [2024-11-20 12:15:13.870543] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid259916 ] 00:05:31.008 [2024-11-20 12:15:13.949766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.008 [2024-11-20 12:15:13.990486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.008 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:31.946 [2024-11-20T11:15:15.062Z] ====================================== 00:05:31.946 [2024-11-20T11:15:15.062Z] busy:2301662878 (cyc) 00:05:31.946 [2024-11-20T11:15:15.062Z] total_run_count: 5320000 00:05:31.946 [2024-11-20T11:15:15.062Z] tsc_hz: 2300000000 (cyc) 00:05:31.946 [2024-11-20T11:15:15.062Z] ====================================== 00:05:31.946 [2024-11-20T11:15:15.062Z] poller_cost: 432 (cyc), 187 (nsec) 00:05:31.946 00:05:31.946 real 0m1.184s 00:05:31.946 user 0m1.108s 00:05:31.946 sys 0m0.073s 00:05:31.946 12:15:15 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.946 12:15:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.946 ************************************ 00:05:31.946 END TEST thread_poller_perf 00:05:31.946 ************************************ 00:05:32.205 12:15:15 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:32.205 00:05:32.205 real 0m2.681s 00:05:32.205 user 0m2.370s 00:05:32.205 sys 0m0.326s 00:05:32.205 12:15:15 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.205 12:15:15 thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.205 ************************************ 00:05:32.205 END TEST thread 00:05:32.205 ************************************ 00:05:32.205 12:15:15 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:32.205 12:15:15 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:32.205 12:15:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.205 12:15:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.205 12:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:32.205 ************************************ 00:05:32.205 START TEST app_cmdline 00:05:32.205 ************************************ 00:05:32.205 12:15:15 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:32.205 * Looking for test storage... 00:05:32.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:32.205 12:15:15 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:32.205 12:15:15 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:32.205 12:15:15 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:32.205 12:15:15 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:32.205 12:15:15 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.205 12:15:15 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.205 12:15:15 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.205 12:15:15 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.205 12:15:15 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.205 12:15:15 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.205 12:15:15 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:32.205 12:15:15 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.205 12:15:15 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.205 12:15:15 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.205 12:15:15 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.205 12:15:15 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:32.205 12:15:15 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:32.205 12:15:15 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.206 12:15:15 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.206 12:15:15 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:32.206 12:15:15 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:32.206 12:15:15 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.206 12:15:15 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:32.206 12:15:15 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.206 12:15:15 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:32.206 12:15:15 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:32.206 12:15:15 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.206 12:15:15 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:32.206 12:15:15 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.206 12:15:15 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.206 12:15:15 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.206 12:15:15 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:32.206 12:15:15 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.206 12:15:15 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:32.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.206 --rc genhtml_branch_coverage=1 
00:05:32.206 --rc genhtml_function_coverage=1 00:05:32.206 --rc genhtml_legend=1 00:05:32.206 --rc geninfo_all_blocks=1 00:05:32.206 --rc geninfo_unexecuted_blocks=1 00:05:32.206 00:05:32.206 ' 00:05:32.206 12:15:15 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:32.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.206 --rc genhtml_branch_coverage=1 00:05:32.206 --rc genhtml_function_coverage=1 00:05:32.206 --rc genhtml_legend=1 00:05:32.206 --rc geninfo_all_blocks=1 00:05:32.206 --rc geninfo_unexecuted_blocks=1 00:05:32.206 00:05:32.206 ' 00:05:32.206 12:15:15 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:32.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.206 --rc genhtml_branch_coverage=1 00:05:32.206 --rc genhtml_function_coverage=1 00:05:32.206 --rc genhtml_legend=1 00:05:32.206 --rc geninfo_all_blocks=1 00:05:32.206 --rc geninfo_unexecuted_blocks=1 00:05:32.206 00:05:32.206 ' 00:05:32.206 12:15:15 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:32.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.206 --rc genhtml_branch_coverage=1 00:05:32.206 --rc genhtml_function_coverage=1 00:05:32.206 --rc genhtml_legend=1 00:05:32.206 --rc geninfo_all_blocks=1 00:05:32.206 --rc geninfo_unexecuted_blocks=1 00:05:32.206 00:05:32.206 ' 00:05:32.206 12:15:15 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:32.206 12:15:15 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=260215 00:05:32.206 12:15:15 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 260215 00:05:32.206 12:15:15 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:32.206 12:15:15 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 260215 ']' 00:05:32.206 12:15:15 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:32.206 12:15:15 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.206 12:15:15 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.206 12:15:15 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.206 12:15:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:32.465 [2024-11-20 12:15:15.368686] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:05:32.465 [2024-11-20 12:15:15.368733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid260215 ] 00:05:32.465 [2024-11-20 12:15:15.443456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.465 [2024-11-20 12:15:15.485951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.724 12:15:15 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.725 12:15:15 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:32.725 12:15:15 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:32.984 { 00:05:32.984 "version": "SPDK v25.01-pre git sha1 0383e688b", 00:05:32.984 "fields": { 00:05:32.984 "major": 25, 00:05:32.984 "minor": 1, 00:05:32.984 "patch": 0, 00:05:32.984 "suffix": "-pre", 00:05:32.984 "commit": "0383e688b" 00:05:32.984 } 00:05:32.984 } 00:05:32.984 12:15:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:32.984 12:15:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:32.984 12:15:15 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:05:32.984 12:15:15 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:32.984 12:15:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:32.984 12:15:15 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.984 12:15:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:32.984 12:15:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:32.984 12:15:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:32.984 12:15:15 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.984 12:15:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:32.984 12:15:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:32.984 12:15:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:32.984 12:15:15 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:32.984 12:15:15 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:32.984 12:15:15 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:32.984 12:15:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.984 12:15:15 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:32.984 12:15:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.984 12:15:15 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:32.984 12:15:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:05:32.984 12:15:15 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:32.984 12:15:15 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:32.984 12:15:15 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:33.242 request: 00:05:33.242 { 00:05:33.242 "method": "env_dpdk_get_mem_stats", 00:05:33.242 "req_id": 1 00:05:33.243 } 00:05:33.243 Got JSON-RPC error response 00:05:33.243 response: 00:05:33.243 { 00:05:33.243 "code": -32601, 00:05:33.243 "message": "Method not found" 00:05:33.243 } 00:05:33.243 12:15:16 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:33.243 12:15:16 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:33.243 12:15:16 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:33.243 12:15:16 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:33.243 12:15:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 260215 00:05:33.243 12:15:16 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 260215 ']' 00:05:33.243 12:15:16 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 260215 00:05:33.243 12:15:16 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:33.243 12:15:16 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.243 12:15:16 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 260215 00:05:33.243 12:15:16 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.243 12:15:16 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.243 12:15:16 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 260215' 00:05:33.243 killing process with pid 260215 00:05:33.243 12:15:16 
app_cmdline -- common/autotest_common.sh@973 -- # kill 260215 00:05:33.243 12:15:16 app_cmdline -- common/autotest_common.sh@978 -- # wait 260215 00:05:33.501 00:05:33.501 real 0m1.360s 00:05:33.501 user 0m1.590s 00:05:33.501 sys 0m0.457s 00:05:33.501 12:15:16 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.501 12:15:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:33.502 ************************************ 00:05:33.502 END TEST app_cmdline 00:05:33.502 ************************************ 00:05:33.502 12:15:16 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:33.502 12:15:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.502 12:15:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.502 12:15:16 -- common/autotest_common.sh@10 -- # set +x 00:05:33.502 ************************************ 00:05:33.502 START TEST version 00:05:33.502 ************************************ 00:05:33.502 12:15:16 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:33.761 * Looking for test storage... 
00:05:33.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:33.761 12:15:16 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:33.761 12:15:16 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:33.761 12:15:16 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:33.761 12:15:16 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:33.761 12:15:16 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.761 12:15:16 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.761 12:15:16 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.761 12:15:16 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.761 12:15:16 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.761 12:15:16 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.761 12:15:16 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.761 12:15:16 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.761 12:15:16 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.761 12:15:16 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.761 12:15:16 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.761 12:15:16 version -- scripts/common.sh@344 -- # case "$op" in 00:05:33.761 12:15:16 version -- scripts/common.sh@345 -- # : 1 00:05:33.761 12:15:16 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.761 12:15:16 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.761 12:15:16 version -- scripts/common.sh@365 -- # decimal 1 00:05:33.761 12:15:16 version -- scripts/common.sh@353 -- # local d=1 00:05:33.761 12:15:16 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.761 12:15:16 version -- scripts/common.sh@355 -- # echo 1 00:05:33.761 12:15:16 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.761 12:15:16 version -- scripts/common.sh@366 -- # decimal 2 00:05:33.761 12:15:16 version -- scripts/common.sh@353 -- # local d=2 00:05:33.761 12:15:16 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.761 12:15:16 version -- scripts/common.sh@355 -- # echo 2 00:05:33.761 12:15:16 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.761 12:15:16 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.761 12:15:16 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.761 12:15:16 version -- scripts/common.sh@368 -- # return 0 00:05:33.761 12:15:16 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.761 12:15:16 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:33.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.761 --rc genhtml_branch_coverage=1 00:05:33.761 --rc genhtml_function_coverage=1 00:05:33.761 --rc genhtml_legend=1 00:05:33.761 --rc geninfo_all_blocks=1 00:05:33.761 --rc geninfo_unexecuted_blocks=1 00:05:33.761 00:05:33.761 ' 00:05:33.761 12:15:16 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:33.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.761 --rc genhtml_branch_coverage=1 00:05:33.761 --rc genhtml_function_coverage=1 00:05:33.761 --rc genhtml_legend=1 00:05:33.761 --rc geninfo_all_blocks=1 00:05:33.761 --rc geninfo_unexecuted_blocks=1 00:05:33.761 00:05:33.761 ' 00:05:33.761 12:15:16 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:33.761 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.761 --rc genhtml_branch_coverage=1 00:05:33.761 --rc genhtml_function_coverage=1 00:05:33.761 --rc genhtml_legend=1 00:05:33.761 --rc geninfo_all_blocks=1 00:05:33.761 --rc geninfo_unexecuted_blocks=1 00:05:33.761 00:05:33.761 ' 00:05:33.761 12:15:16 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:33.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.761 --rc genhtml_branch_coverage=1 00:05:33.761 --rc genhtml_function_coverage=1 00:05:33.761 --rc genhtml_legend=1 00:05:33.761 --rc geninfo_all_blocks=1 00:05:33.761 --rc geninfo_unexecuted_blocks=1 00:05:33.761 00:05:33.761 ' 00:05:33.761 12:15:16 version -- app/version.sh@17 -- # get_header_version major 00:05:33.761 12:15:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:33.761 12:15:16 version -- app/version.sh@14 -- # cut -f2 00:05:33.761 12:15:16 version -- app/version.sh@14 -- # tr -d '"' 00:05:33.761 12:15:16 version -- app/version.sh@17 -- # major=25 00:05:33.761 12:15:16 version -- app/version.sh@18 -- # get_header_version minor 00:05:33.761 12:15:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:33.761 12:15:16 version -- app/version.sh@14 -- # cut -f2 00:05:33.761 12:15:16 version -- app/version.sh@14 -- # tr -d '"' 00:05:33.761 12:15:16 version -- app/version.sh@18 -- # minor=1 00:05:33.761 12:15:16 version -- app/version.sh@19 -- # get_header_version patch 00:05:33.761 12:15:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:33.761 12:15:16 version -- app/version.sh@14 -- # cut -f2 00:05:33.761 12:15:16 version -- app/version.sh@14 -- # tr -d '"' 00:05:33.761 
12:15:16 version -- app/version.sh@19 -- # patch=0 00:05:33.761 12:15:16 version -- app/version.sh@20 -- # get_header_version suffix 00:05:33.761 12:15:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:33.761 12:15:16 version -- app/version.sh@14 -- # cut -f2 00:05:33.761 12:15:16 version -- app/version.sh@14 -- # tr -d '"' 00:05:33.761 12:15:16 version -- app/version.sh@20 -- # suffix=-pre 00:05:33.761 12:15:16 version -- app/version.sh@22 -- # version=25.1 00:05:33.761 12:15:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:33.761 12:15:16 version -- app/version.sh@28 -- # version=25.1rc0 00:05:33.761 12:15:16 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:33.761 12:15:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:33.761 12:15:16 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:33.761 12:15:16 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:33.761 00:05:33.761 real 0m0.239s 00:05:33.761 user 0m0.151s 00:05:33.761 sys 0m0.131s 00:05:33.761 12:15:16 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.761 12:15:16 version -- common/autotest_common.sh@10 -- # set +x 00:05:33.761 ************************************ 00:05:33.761 END TEST version 00:05:33.761 ************************************ 00:05:33.761 12:15:16 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:33.762 12:15:16 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:33.762 12:15:16 -- spdk/autotest.sh@194 -- # uname -s 00:05:33.762 12:15:16 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:33.762 12:15:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:33.762 12:15:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:33.762 12:15:16 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:33.762 12:15:16 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:33.762 12:15:16 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:33.762 12:15:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:33.762 12:15:16 -- common/autotest_common.sh@10 -- # set +x 00:05:34.021 12:15:16 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:34.021 12:15:16 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:34.021 12:15:16 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:34.021 12:15:16 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:34.021 12:15:16 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:34.021 12:15:16 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:34.021 12:15:16 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:34.021 12:15:16 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:34.021 12:15:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.021 12:15:16 -- common/autotest_common.sh@10 -- # set +x 00:05:34.021 ************************************ 00:05:34.021 START TEST nvmf_tcp 00:05:34.021 ************************************ 00:05:34.021 12:15:16 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:34.021 * Looking for test storage... 
00:05:34.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:34.021 12:15:17 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.021 12:15:17 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.021 12:15:17 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.021 12:15:17 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.021 12:15:17 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.021 12:15:17 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.021 12:15:17 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.021 12:15:17 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.021 12:15:17 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.021 12:15:17 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.021 12:15:17 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.021 12:15:17 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.021 12:15:17 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.021 12:15:17 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.021 12:15:17 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.021 12:15:17 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:34.021 12:15:17 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:34.021 12:15:17 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.021 12:15:17 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.021 12:15:17 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:34.021 12:15:17 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:34.022 12:15:17 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.022 12:15:17 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:34.022 12:15:17 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.022 12:15:17 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:34.022 12:15:17 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:34.022 12:15:17 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.022 12:15:17 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:34.022 12:15:17 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.022 12:15:17 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.022 12:15:17 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.022 12:15:17 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:34.022 12:15:17 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.022 12:15:17 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.022 --rc genhtml_branch_coverage=1 00:05:34.022 --rc genhtml_function_coverage=1 00:05:34.022 --rc genhtml_legend=1 00:05:34.022 --rc geninfo_all_blocks=1 00:05:34.022 --rc geninfo_unexecuted_blocks=1 00:05:34.022 00:05:34.022 ' 00:05:34.022 12:15:17 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.022 --rc genhtml_branch_coverage=1 00:05:34.022 --rc genhtml_function_coverage=1 00:05:34.022 --rc genhtml_legend=1 00:05:34.022 --rc geninfo_all_blocks=1 00:05:34.022 --rc geninfo_unexecuted_blocks=1 00:05:34.022 00:05:34.022 ' 00:05:34.022 12:15:17 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:34.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.022 --rc genhtml_branch_coverage=1 00:05:34.022 --rc genhtml_function_coverage=1 00:05:34.022 --rc genhtml_legend=1 00:05:34.022 --rc geninfo_all_blocks=1 00:05:34.022 --rc geninfo_unexecuted_blocks=1 00:05:34.022 00:05:34.022 ' 00:05:34.022 12:15:17 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.022 --rc genhtml_branch_coverage=1 00:05:34.022 --rc genhtml_function_coverage=1 00:05:34.022 --rc genhtml_legend=1 00:05:34.022 --rc geninfo_all_blocks=1 00:05:34.022 --rc geninfo_unexecuted_blocks=1 00:05:34.022 00:05:34.022 ' 00:05:34.022 12:15:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:34.022 12:15:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:34.022 12:15:17 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:34.022 12:15:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:34.022 12:15:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.022 12:15:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.022 ************************************ 00:05:34.022 START TEST nvmf_target_core 00:05:34.022 ************************************ 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:34.282 * Looking for test storage... 
00:05:34.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.282 12:15:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.282 --rc genhtml_branch_coverage=1 00:05:34.282 --rc genhtml_function_coverage=1 00:05:34.283 --rc genhtml_legend=1 00:05:34.283 --rc geninfo_all_blocks=1 00:05:34.283 --rc geninfo_unexecuted_blocks=1 00:05:34.283 00:05:34.283 ' 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.283 --rc genhtml_branch_coverage=1 
00:05:34.283 --rc genhtml_function_coverage=1 00:05:34.283 --rc genhtml_legend=1 00:05:34.283 --rc geninfo_all_blocks=1 00:05:34.283 --rc geninfo_unexecuted_blocks=1 00:05:34.283 00:05:34.283 ' 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.283 --rc genhtml_branch_coverage=1 00:05:34.283 --rc genhtml_function_coverage=1 00:05:34.283 --rc genhtml_legend=1 00:05:34.283 --rc geninfo_all_blocks=1 00:05:34.283 --rc geninfo_unexecuted_blocks=1 00:05:34.283 00:05:34.283 ' 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.283 --rc genhtml_branch_coverage=1 00:05:34.283 --rc genhtml_function_coverage=1 00:05:34.283 --rc genhtml_legend=1 00:05:34.283 --rc geninfo_all_blocks=1 00:05:34.283 --rc geninfo_unexecuted_blocks=1 00:05:34.283 00:05:34.283 ' 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:34.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:34.283 ************************************ 00:05:34.283 START TEST nvmf_abort 00:05:34.283 ************************************ 00:05:34.283 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:34.543 * Looking for test storage... 
00:05:34.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:34.543 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.543 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.543 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.543 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.543 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.543 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.544 
12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.544 --rc genhtml_branch_coverage=1 00:05:34.544 --rc genhtml_function_coverage=1 00:05:34.544 --rc genhtml_legend=1 00:05:34.544 --rc geninfo_all_blocks=1 00:05:34.544 --rc 
geninfo_unexecuted_blocks=1 00:05:34.544 00:05:34.544 ' 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.544 --rc genhtml_branch_coverage=1 00:05:34.544 --rc genhtml_function_coverage=1 00:05:34.544 --rc genhtml_legend=1 00:05:34.544 --rc geninfo_all_blocks=1 00:05:34.544 --rc geninfo_unexecuted_blocks=1 00:05:34.544 00:05:34.544 ' 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.544 --rc genhtml_branch_coverage=1 00:05:34.544 --rc genhtml_function_coverage=1 00:05:34.544 --rc genhtml_legend=1 00:05:34.544 --rc geninfo_all_blocks=1 00:05:34.544 --rc geninfo_unexecuted_blocks=1 00:05:34.544 00:05:34.544 ' 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.544 --rc genhtml_branch_coverage=1 00:05:34.544 --rc genhtml_function_coverage=1 00:05:34.544 --rc genhtml_legend=1 00:05:34.544 --rc geninfo_all_blocks=1 00:05:34.544 --rc geninfo_unexecuted_blocks=1 00:05:34.544 00:05:34.544 ' 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
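The trace above steps through SPDK's `lt 1.15 2` / `cmp_versions` helper from `scripts/common.sh`: both version strings are split on `.`, `-`, and `:` via `IFS=.-:`, each component is validated as decimal, and components are compared left to right. A condensed re-implementation of that logic (a sketch for readability only — the real helper also handles `>`, `=`, and non-numeric components via the `decimal` function):

```shell
# ver_lt A B: succeed iff version A sorts strictly before version B.
# Mirrors the component-wise compare traced from scripts/common.sh@333-368;
# assumes plain decimal components (no leading zeros).
ver_lt() {
    local IFS=.-:                 # split on dots, dashes, colons, as in the trace
    local -a a=($1)
    local -a b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components compare as 0
        (( x < y )) && return 0           # first differing component decides
        (( x > y )) && return 1
    done
    return 1                              # equal versions: not strictly less-than
}
```

With this, `ver_lt 1.15 2` succeeds, which is exactly the branch the log takes (`return 0` at `scripts/common.sh@368`) before enabling the lcov branch/function coverage options.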
00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.544 12:15:17 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.544 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:34.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:34.545 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:34.545 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:34.545 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:34.545 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:34.545 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:34.545 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:34.545 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:34.545 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:34.545 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:34.545 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:34.545 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:34.545 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:34.545 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:34.545 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:34.545 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:34.545 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:34.545 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:34.545 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:41.120 12:15:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:41.120 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:41.120 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:41.120 12:15:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:41.120 Found net devices under 0000:86:00.0: cvl_0_0 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:05:41.120 Found net devices under 0000:86:00.1: cvl_0_1 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:41.120 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:41.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:41.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:05:41.121 00:05:41.121 --- 10.0.0.2 ping statistics --- 00:05:41.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:41.121 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:41.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:41.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:05:41.121 00:05:41.121 --- 10.0.0.1 ping statistics --- 00:05:41.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:41.121 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort 
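The `nvmf_tcp_init` section above builds the test network by moving the target-side port into a private namespace, so initiator and target traffic crosses a real TCP path between the two E810 ports. Collected into one script for readability (these are the exact commands from the trace; they require root and this rig's `cvl_0_0`/`cvl_0_1` interfaces, so this is a summary of the log, not a portable recipe):

```shell
# Target port cvl_0_0 goes into namespace cvl_0_0_ns_spdk at 10.0.0.2;
# initiator port cvl_0_1 stays in the default namespace at 10.0.0.1.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                     # initiator -> target reachability check
ip netns exec "$NS" ping -c 1 10.0.0.1 # target -> initiator reachability check
```

The sub-millisecond ping RTTs in the log (0.451 ms and 0.146 ms) confirm both directions before the target application is started inside the namespace.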
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=263895 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 263895 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 263895 ']' 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.121 [2024-11-20 12:15:23.583354] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:05:41.121 [2024-11-20 12:15:23.583402] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:41.121 [2024-11-20 12:15:23.660936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:41.121 [2024-11-20 12:15:23.703260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:41.121 [2024-11-20 12:15:23.703300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:41.121 [2024-11-20 12:15:23.703307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:41.121 [2024-11-20 12:15:23.703313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:41.121 [2024-11-20 12:15:23.703318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:41.121 [2024-11-20 12:15:23.704755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.121 [2024-11-20 12:15:23.704842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.121 [2024-11-20 12:15:23.704843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.121 [2024-11-20 12:15:23.853332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.121 Malloc0 00:05:41.121 12:15:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.121 Delay0 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.121 [2024-11-20 12:15:23.929638] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.121 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:41.121 [2024-11-20 12:15:24.066296] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:43.029 Initializing NVMe Controllers 00:05:43.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:43.029 controller IO queue size 128 less than required 00:05:43.029 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:43.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:43.029 Initialization complete. Launching workers. 
00:05:43.029 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36420 00:05:43.029 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36481, failed to submit 62 00:05:43.029 success 36424, unsuccessful 57, failed 0 00:05:43.029 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:43.029 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.029 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:43.288 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.288 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:43.288 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:43.288 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:43.288 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:43.288 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:43.288 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:43.288 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:43.288 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:43.288 rmmod nvme_tcp 00:05:43.288 rmmod nvme_fabrics 00:05:43.288 rmmod nvme_keyring 00:05:43.288 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:43.288 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:43.288 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:43.289 12:15:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 263895 ']' 00:05:43.289 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 263895 00:05:43.289 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 263895 ']' 00:05:43.289 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 263895 00:05:43.289 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:43.289 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.289 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 263895 00:05:43.289 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:43.289 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:43.289 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 263895' 00:05:43.289 killing process with pid 263895 00:05:43.289 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 263895 00:05:43.289 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 263895 00:05:43.548 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:43.548 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:43.548 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:43.548 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:43.548 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:43.548 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:05:43.548 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:43.548 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:43.548 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:43.548 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:43.548 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:43.548 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:45.458 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:45.458 00:05:45.458 real 0m11.134s 00:05:45.458 user 0m11.588s 00:05:45.458 sys 0m5.393s 00:05:45.458 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.458 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:45.458 ************************************ 00:05:45.458 END TEST nvmf_abort 00:05:45.458 ************************************ 00:05:45.458 12:15:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:45.458 12:15:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:45.458 12:15:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.458 12:15:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:45.717 ************************************ 00:05:45.717 START TEST nvmf_ns_hotplug_stress 00:05:45.717 ************************************ 00:05:45.717 12:15:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:45.717 * Looking for test storage... 00:05:45.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:45.717 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.717 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.717 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.717 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.717 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.718 
12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.718 12:15:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.718 --rc genhtml_branch_coverage=1 00:05:45.718 --rc genhtml_function_coverage=1 00:05:45.718 --rc genhtml_legend=1 00:05:45.718 --rc geninfo_all_blocks=1 00:05:45.718 --rc geninfo_unexecuted_blocks=1 00:05:45.718 00:05:45.718 ' 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.718 --rc genhtml_branch_coverage=1 00:05:45.718 --rc genhtml_function_coverage=1 00:05:45.718 --rc genhtml_legend=1 00:05:45.718 --rc geninfo_all_blocks=1 00:05:45.718 --rc geninfo_unexecuted_blocks=1 00:05:45.718 00:05:45.718 ' 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.718 --rc genhtml_branch_coverage=1 00:05:45.718 --rc genhtml_function_coverage=1 00:05:45.718 --rc genhtml_legend=1 00:05:45.718 --rc geninfo_all_blocks=1 00:05:45.718 --rc geninfo_unexecuted_blocks=1 00:05:45.718 00:05:45.718 ' 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.718 --rc genhtml_branch_coverage=1 00:05:45.718 --rc genhtml_function_coverage=1 00:05:45.718 --rc genhtml_legend=1 00:05:45.718 --rc geninfo_all_blocks=1 00:05:45.718 --rc geninfo_unexecuted_blocks=1 00:05:45.718 
00:05:45.718 ' 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:45.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:45.718 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:45.719 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:45.719 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:45.719 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:45.719 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:45.719 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:45.719 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:45.719 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:45.719 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:45.719 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:45.719 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:45.719 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:45.719 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:52.292 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:52.292 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:52.292 12:15:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:52.292 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:52.292 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:52.292 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:52.292 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:52.293 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:52.293 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:52.293 12:15:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:52.293 Found net devices under 0000:86:00.0: cvl_0_0 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:52.293 12:15:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:52.293 Found net devices under 0000:86:00.1: cvl_0_1 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:52.293 12:15:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:52.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:52.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:05:52.293 00:05:52.293 --- 10.0.0.2 ping statistics --- 00:05:52.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:52.293 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:05:52.293 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:52.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:52.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:05:52.294 00:05:52.294 --- 10.0.0.1 ping statistics --- 00:05:52.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:52.294 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=267914 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 267914 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 267914 ']' 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
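The `nvmf_tcp_init` steps traced earlier (netns creation, moving `cvl_0_0` into the namespace, addressing both ends, opening port 4420) can be sketched as below. This is a dry-run reconstruction, not the test script itself: the interface names `cvl_0_0`/`cvl_0_1` and addresses are taken from this log, and a `run` wrapper prints each command instead of executing it, since the real steps require root and the physical NICs.

```shell
# Dry-run sketch of the nvmf_tcp_init topology from this log.
# Assumption: cvl_0_0 (target side, moved into a netns) and cvl_0_1
# (initiator side, left in the default namespace) as seen above.
run() { echo "+ $*"; }   # print instead of execute; real setup needs root

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                      # target NIC into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                                   # reachability check, as in the log
```

Isolating the target NIC in its own namespace is what lets a single host act as both NVMe/TCP target (10.0.0.2, inside the netns) and initiator (10.0.0.1, outside) over real hardware.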
00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.294 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:52.294 [2024-11-20 12:15:34.881065] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:05:52.294 [2024-11-20 12:15:34.881119] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:52.294 [2024-11-20 12:15:34.961730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.294 [2024-11-20 12:15:35.003737] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:52.294 [2024-11-20 12:15:35.003773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:52.294 [2024-11-20 12:15:35.003781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:52.294 [2024-11-20 12:15:35.003787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:52.294 [2024-11-20 12:15:35.003792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:52.294 [2024-11-20 12:15:35.005203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.294 [2024-11-20 12:15:35.005314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.294 [2024-11-20 12:15:35.005314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.294 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.294 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:52.294 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:52.294 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:52.294 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:52.294 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:52.294 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:52.294 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:52.294 [2024-11-20 12:15:35.306083] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.294 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:52.553 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:52.811 [2024-11-20 12:15:35.707524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:52.812 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:53.071 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:53.071 Malloc0 00:05:53.071 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:53.331 Delay0 00:05:53.331 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.590 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:53.849 NULL1 00:05:53.849 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:53.849 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:53.849 12:15:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=268186 00:05:53.849 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:05:53.849 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.227 Read completed with error (sct=0, sc=11) 00:05:55.227 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.486 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:55.486 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:55.486 true 00:05:55.486 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:05:55.486 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.424 12:15:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.789 12:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:56.789 12:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:56.789 true 00:05:56.789 12:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:05:56.789 12:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.047 12:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.048 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:57.048 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:57.306 true 00:05:57.306 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:05:57.306 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.686 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.686 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:58.686 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:58.686 true 00:05:58.686 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:05:58.687 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.945 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.204 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:59.204 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:59.463 true 00:05:59.463 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:05:59.463 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.463 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.722 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:59.722 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:59.981 true 00:05:59.981 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:05:59.981 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.240 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.240 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:00.240 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:00.499 true 00:06:00.499 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:00.499 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
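The repeating pattern in the trace above is the core of `ns_hotplug_stress.sh`: while `spdk_nvme_perf` hammers the subsystem with reads, the script hot-adds `Delay0` as a namespace, grows `NULL1` by one block, and hot-removes namespace 1, over and over (`null_size` 1000, 1001, 1002, ...). A minimal sketch of that cycle, with `rpc` stubbed out so the control flow runs without an SPDK target (the `cnode1`, `Delay0`, and `NULL1` names are from this log):

```shell
# Sketch of the ns_hotplug_stress add/resize/remove cycle.
rpc() { echo "rpc $*"; }   # stand-in for scripts/rpc.py against a live target

null_size=1000
for _ in 1 2 3; do
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # hot-add Delay0 as a namespace
  null_size=$((null_size + 1))
  rpc bdev_null_resize NULL1 "$null_size"                       # grow the null bdev by one block
  rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove namespace 1
done
echo "final null_size=$null_size"
```

The flood of `Read completed with error (sct=0, sc=11)` records in the log is the expected side effect: each hot-remove invalidates in-flight reads from the perf workload, and the target completes them with that status rather than hanging.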
00:06:01.877 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.877 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:01.877 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:02.136 true 00:06:02.136 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:02.136 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.394 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.653 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:02.653 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:02.653 true 00:06:02.653 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:02.653 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.031 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.031 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:04.031 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:04.290 true 00:06:04.290 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:04.290 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.227 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.227 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:05.227 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:05.485 true 00:06:05.485 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:05.485 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.745 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.003 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:06.004 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:06.004 true 00:06:06.004 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:06.261 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.200 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.459 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:07.459 12:15:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:07.718 true 00:06:07.718 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:07.718 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.977 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.977 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:07.977 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:08.236 true 00:06:08.236 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:08.236 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.613 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.613 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:06:09.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.613 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:09.613 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:09.872 true 00:06:09.872 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:09.872 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.810 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.810 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:10.810 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:11.069 true 00:06:11.069 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:11.069 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.328 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.587 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:11.587 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:11.587 true 00:06:11.587 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:11.587 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.965 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.965 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.965 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:12.965 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:13.223 true 00:06:13.223 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:13.223 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.158 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.158 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.158 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:14.158 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:14.418 true 00:06:14.418 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:14.418 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.677 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.677 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:14.677 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:14.936 true 00:06:14.936 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 
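Between iterations the script runs `kill -0 268186` (the `spdk_nvme_perf` PID in this run). Signal 0 is a standard liveness probe: nothing is delivered, but the call fails if the PID no longer exists, so the loop aborts as soon as the workload dies. The idiom, demonstrated with an ordinary background process:

```shell
# kill -0 as a liveness check: succeeds while the process exists,
# fails once it has exited and been reaped.
sleep 30 &
pid=$!
kill -0 "$pid" && echo "process $pid is alive"
kill "$pid"
wait "$pid" 2>/dev/null
kill -0 "$pid" 2>/dev/null || echo "process $pid is gone"
```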
00:06:14.936 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.314 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.314 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:16.314 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:16.572 true 00:06:16.572 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:16.572 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.509 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.509 
12:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:17.509 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:17.768 true 00:06:17.768 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:17.768 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.027 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.027 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:18.027 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:18.286 true 00:06:18.286 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:18.286 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.664 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.664 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:06:19.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.664 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:19.664 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:19.923 true 00:06:19.923 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:19.923 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.860 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.860 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:20.860 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:21.119 true 00:06:21.119 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:21.119 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.378 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.378 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:21.378 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:21.636 true 00:06:21.636 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:21.636 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.013 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.013 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.013 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.013 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.013 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.013 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.013 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.013 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:23.013 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:23.272 true 00:06:23.272 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:23.272 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.205 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.205 Initializing NVMe Controllers 00:06:24.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:24.205 Controller IO queue size 128, less than required. 00:06:24.205 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:24.205 Controller IO queue size 128, less than required. 00:06:24.205 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:24.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:24.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:24.205 Initialization complete. Launching workers. 
00:06:24.205 ========================================================
00:06:24.205 Latency(us)
00:06:24.205 Device Information : IOPS MiB/s Average min max
00:06:24.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1916.89 0.94 43699.67 2987.43 1012373.75
00:06:24.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16674.28 8.14 7676.03 1636.06 461010.82
00:06:24.205 ========================================================
00:06:24.205 Total : 18591.17 9.08 11390.34 1636.06 1012373.75
00:06:24.205
00:06:24.205 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:24.205 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:24.462 true 00:06:24.462 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 268186 00:06:24.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (268186) - No such process 00:06:24.463 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 268186 00:06:24.463 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.720 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:24.979 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:24.979 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 
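[editor's note] The trace above repeats lines 44-50 of ns_hotplug_stress.sh: while the target process (PID 268186 in this run) is alive, the script removes namespace 1, re-adds the Delay0 bdev as a namespace, bumps null_size, and resizes the NULL1 bdev. A minimal sketch of that loop, with a hypothetical echo-based rpc() stub standing in for scripts/rpc.py so it runs without a live SPDK target (the PID and the 1019 starting size are taken from this log; the three-iteration cap replaces the real kill -0 liveness check):

```shell
# Hypothetical stub for /var/jenkins/.../spdk/scripts/rpc.py: just echo the call.
rpc() { echo "rpc.py $*"; }

pid=268186        # target PID seen in this log
null_size=1019    # null_size values in this log run 1019..1028
for _ in 1 2 3; do    # real script loops: while kill -0 "$pid" (sh@44)
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # sh@45
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # sh@46
    null_size=$((null_size + 1))                                # sh@49
    rpc bdev_null_resize NULL1 "$null_size"                     # sh@50
done
echo "final null_size=$null_size"
```

The loop ends when the kill -0 probe fails ("No such process" above), at which point the script waits on the dead target and tears down both namespaces.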
00:06:24.979 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:24.979 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:24.979 12:16:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:24.979 null0 00:06:24.979 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:24.979 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:24.979 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:25.238 null1 00:06:25.238 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:25.238 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:25.238 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:25.497 null2 00:06:25.497 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:25.497 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:25.497 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:25.497 null3 00:06:25.755 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:25.755 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:25.755 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:25.755 null4 00:06:25.755 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:25.755 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:25.755 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:26.014 null5 00:06:26.014 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:26.014 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:26.014 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:26.272 null6 00:06:26.272 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:26.272 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:26.272 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:26.532 null7 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:26.532 12:16:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
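[editor's note] The null0..null7 creations traced above (ns_hotplug_stress.sh@59-60) set up one backing bdev per worker thread: eight null bdevs, each 100 MiB with a 4096-byte block size, matching the bdev_null_create arguments in the log. A sketch under those assumptions, with a hypothetical rpc() stub in place of scripts/rpc.py:

```shell
# Hypothetical stub for scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

nthreads=8
for ((i = 0; i < nthreads; i++)); do
    # name, size in MiB, block size in bytes (as seen in the trace)
    rpc bdev_null_create "null$i" 100 4096
done
```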
00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 273789 273791 273792 273794 273796 273798 273800 273802 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:26.532 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
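[editor's note] The interleaved trace above is eight background add_remove workers being launched (ns_hotplug_stress.sh@58-66), each repeatedly hot-adding and hot-removing its own namespace/bdev pair against cnode1, with the parent collecting the worker PIDs and waiting on all of them. A minimal sketch of that fan-out, assuming the ten-iteration inner loop visible in the sh@16 trace and stubbing scripts/rpc.py with a hypothetical no-op:

```shell
rpc() { :; }   # hypothetical no-op stub for scripts/rpc.py

# add_remove nsid bdev: hot-add then hot-remove one namespace repeatedly,
# mirroring ns_hotplug_stress.sh@14-19.
add_remove() {
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &   # nsid 1..8 against null0..null7
    pids+=($!)
done
wait "${pids[@]}"
echo "all ${#pids[@]} workers finished"
```

Because all eight workers race against the same subsystem, individual RPCs can legitimately fail mid-run; the real test tolerates that, which is the point of the stress.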
00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:26.792 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:27.052 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.052 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:27.052 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:27.052 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:27.052 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:06:27.052 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:27.052 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:27.052 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.311 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:27.570 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:27.570 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.570 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:27.570 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:27.570 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:27.570 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:27.570 12:16:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:27.570 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:27.570 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.570 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.570 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:27.570 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.570 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.570 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:27.570 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:27.829 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:27.830 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:27.830 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:27.830 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.090 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:28.349 12:16:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:28.349 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:28.349 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.349 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:28.349 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:28.349 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:28.349 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:28.349 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:28.608 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.608 12:16:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.608 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.608 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.608 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:28.608 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:28.608 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.608 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.608 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:28.608 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.608 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.608 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:28.608 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.608 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:28.608 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:28.608 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.608 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.609 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:28.609 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.609 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.609 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:28.609 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.609 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.609 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:28.609 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:28.609 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:28.868 12:16:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.868 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.869 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:28.869 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.869 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.869 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.128 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:29.128 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:29.128 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.128 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:29.128 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:29.128 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:29.128 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:29.128 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.387 
12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.387 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.388 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:29.646 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:29.646 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.646 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:29.646 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:29.646 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:29.646 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:29.646 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:29.646 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.905 12:16:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:29.905 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:29.906 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.906 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:29.906 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:29.906 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:29.906 12:16:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:29.906 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:29.906 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.165 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:30.424 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.424 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:30.424 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:30.424 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:30.424 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:30.424 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.424 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:30.424 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:30.683 rmmod nvme_tcp 00:06:30.683 rmmod nvme_fabrics 00:06:30.683 rmmod nvme_keyring 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 267914 ']' 00:06:30.683 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 267914 00:06:30.684 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' 
-z 267914 ']' 00:06:30.684 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 267914 00:06:30.684 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:30.684 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.684 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 267914 00:06:30.684 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:30.684 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:30.684 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 267914' 00:06:30.684 killing process with pid 267914 00:06:30.684 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 267914 00:06:30.684 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 267914 00:06:30.943 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:30.943 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:30.943 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:30.943 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:30.943 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:30.943 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:30.943 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- 
# iptables-restore 00:06:30.943 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:30.943 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:30.943 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.943 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:30.943 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.479 12:16:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:33.479 00:06:33.479 real 0m47.414s 00:06:33.479 user 3m12.672s 00:06:33.479 sys 0m15.750s 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:33.479 ************************************ 00:06:33.479 END TEST nvmf_ns_hotplug_stress 00:06:33.479 ************************************ 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:33.479 ************************************ 00:06:33.479 START TEST nvmf_delete_subsystem 00:06:33.479 ************************************ 00:06:33.479 12:16:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:33.479 * Looking for test storage... 00:06:33.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.479 12:16:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.479 12:16:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.479 --rc genhtml_branch_coverage=1 00:06:33.479 --rc genhtml_function_coverage=1 00:06:33.479 --rc genhtml_legend=1 00:06:33.479 --rc geninfo_all_blocks=1 00:06:33.479 --rc geninfo_unexecuted_blocks=1 00:06:33.479 00:06:33.479 ' 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.479 --rc genhtml_branch_coverage=1 00:06:33.479 --rc genhtml_function_coverage=1 00:06:33.479 --rc genhtml_legend=1 00:06:33.479 --rc geninfo_all_blocks=1 00:06:33.479 --rc geninfo_unexecuted_blocks=1 00:06:33.479 00:06:33.479 ' 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.479 --rc genhtml_branch_coverage=1 00:06:33.479 --rc genhtml_function_coverage=1 00:06:33.479 --rc genhtml_legend=1 00:06:33.479 --rc geninfo_all_blocks=1 00:06:33.479 --rc geninfo_unexecuted_blocks=1 00:06:33.479 00:06:33.479 ' 00:06:33.479 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.479 --rc genhtml_branch_coverage=1 00:06:33.480 --rc genhtml_function_coverage=1 00:06:33.480 --rc genhtml_legend=1 00:06:33.480 --rc geninfo_all_blocks=1 00:06:33.480 --rc geninfo_unexecuted_blocks=1 00:06:33.480 00:06:33.480 ' 
00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.480 12:16:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:33.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:33.480 12:16:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:40.053 12:16:22 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:40.053 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:40.054 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:40.054 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:40.054 Found net devices under 0000:86:00.0: cvl_0_0 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:86:00.1: cvl_0_1' 00:06:40.054 Found net devices under 0000:86:00.1: cvl_0_1 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:40.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:40.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:06:40.054 00:06:40.054 --- 10.0.0.2 ping statistics --- 00:06:40.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.054 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:40.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:40.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:06:40.054 00:06:40.054 --- 10.0.0.1 ping statistics --- 00:06:40.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.054 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:40.054 12:16:22 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=278187 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 278187 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 278187 ']' 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.054 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.055 [2024-11-20 12:16:22.351736] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:06:40.055 [2024-11-20 12:16:22.351785] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.055 [2024-11-20 12:16:22.430059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.055 [2024-11-20 12:16:22.471887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:40.055 [2024-11-20 12:16:22.471922] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:40.055 [2024-11-20 12:16:22.471929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:40.055 [2024-11-20 12:16:22.471935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:40.055 [2024-11-20 12:16:22.471940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
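The namespace plumbing traced earlier by nvmf_tcp_init (nvmf/common.sh@250-@291) condenses to a short ip/iptables sequence: the target-side port is moved into its own network namespace so initiator and target traffic genuinely crosses the wire on this phy test bed. A sketch of the equivalent commands, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing from this run (requires root and the same NIC layout):

```shell
# Move the target-side port into a dedicated namespace (hypothetical
# standalone re-run of what nvmf_tcp_init did above).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends: initiator stays in the root namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring the links (and the namespace loopback) up.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check reachability in both directions, as the trace does.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Because the target application is later launched with `ip netns exec cvl_0_0_ns_spdk`, only traffic arriving through the physical link can reach its listener.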
00:06:40.055 [2024-11-20 12:16:22.473156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.055 [2024-11-20 12:16:22.473159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.055 [2024-11-20 12:16:22.608727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.055 [2024-11-20 12:16:22.632932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.055 NULL1 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.055 Delay0 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.055 12:16:22 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=278250 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:40.055 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:40.055 [2024-11-20 12:16:22.749754] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
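The rpc_cmd calls traced in delete_subsystem.sh@15-@24 map one-for-one onto SPDK's `scripts/rpc.py`. A hedged sketch of the equivalent sequence (the repo path and a target already running in the namespace are assumptions taken from this run; all flags are verbatim from the trace):

```shell
# Run from the SPDK source tree against the nvmf_tgt started above.
RPC=scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, 8192 B in-capsule data
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                              # allow any host, max 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420                                  # listen inside the target namespace
$RPC bdev_null_create NULL1 1000 512                             # 1000 MiB null bdev, 512 B blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000                 # ~1 s latencies keep I/O in flight
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay bdev is the point of the test: with ~1 s average latencies, `spdk_nvme_perf` is guaranteed to have commands outstanding when `nvmf_delete_subsystem` fires two seconds later.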
00:06:41.960 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:41.960 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.960 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 starting I/O failed: -6 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 starting I/O failed: -6 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 starting I/O failed: -6 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 starting I/O failed: -6 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 starting I/O failed: -6 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 starting I/O failed: -6 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error 
(sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 starting I/O failed: -6 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 starting I/O failed: -6 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 starting I/O failed: -6 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 [2024-11-20 12:16:24.789294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16962c0 is same with the state(6) to be set 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 starting I/O failed: -6 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 starting I/O failed: -6 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 Read completed with 
error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 starting I/O failed: -6 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 starting I/O failed: -6 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 Write completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.960 Read completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 starting I/O failed: -6 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 
00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 starting I/O failed: -6 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 starting I/O failed: -6 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 starting I/O failed: -6 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 starting I/O failed: -6 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 starting I/O failed: -6 00:06:41.961 Read completed with error (sct=0, sc=8) 
00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 starting I/O failed: -6 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 starting I/O failed: -6 00:06:41.961 [2024-11-20 12:16:24.789822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7feb7400d4d0 is same with the state(6) to be set 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error 
(sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Write completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 Read completed with error (sct=0, sc=8) 00:06:41.961 [2024-11-20 12:16:24.790028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16964a0 is same with the state(6) to be set 00:06:42.898 [2024-11-20 12:16:25.762363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16979a0 is same with the state(6) to be set 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, 
sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 [2024-11-20 12:16:25.791013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7feb7400d020 is same with the state(6) to be set 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 
00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 [2024-11-20 12:16:25.791184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7feb7400d800 is same with the state(6) to be set 00:06:42.898 
Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 [2024-11-20 12:16:25.791301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1696680 is same with the state(6) to be set 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed with error (sct=0, sc=8) 00:06:42.898 Read completed 
with error (sct=0, sc=8) 00:06:42.898 Write completed with error (sct=0, sc=8) 00:06:42.899 Read completed with error (sct=0, sc=8) 00:06:42.899 Write completed with error (sct=0, sc=8) 00:06:42.899 Read completed with error (sct=0, sc=8) 00:06:42.899 Read completed with error (sct=0, sc=8) 00:06:42.899 Read completed with error (sct=0, sc=8) 00:06:42.899 Read completed with error (sct=0, sc=8) 00:06:42.899 Read completed with error (sct=0, sc=8) 00:06:42.899 Read completed with error (sct=0, sc=8) 00:06:42.899 Read completed with error (sct=0, sc=8) 00:06:42.899 Write completed with error (sct=0, sc=8) 00:06:42.899 Read completed with error (sct=0, sc=8) 00:06:42.899 Read completed with error (sct=0, sc=8) 00:06:42.899 Read completed with error (sct=0, sc=8) 00:06:42.899 Write completed with error (sct=0, sc=8) 00:06:42.899 Write completed with error (sct=0, sc=8) 00:06:42.899 Write completed with error (sct=0, sc=8) 00:06:42.899 Write completed with error (sct=0, sc=8) 00:06:42.899 Write completed with error (sct=0, sc=8) 00:06:42.899 Write completed with error (sct=0, sc=8) 00:06:42.899 Read completed with error (sct=0, sc=8) 00:06:42.899 Read completed with error (sct=0, sc=8) 00:06:42.899 Read completed with error (sct=0, sc=8) 00:06:42.899 Read completed with error (sct=0, sc=8) 00:06:42.899 Read completed with error (sct=0, sc=8) 00:06:42.899 Read completed with error (sct=0, sc=8) 00:06:42.899 [2024-11-20 12:16:25.791819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7feb74000c40 is same with the state(6) to be set 00:06:42.899 Initializing NVMe Controllers 00:06:42.899 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:42.899 Controller IO queue size 128, less than required. 00:06:42.899 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:42.899 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:42.899 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:42.899 Initialization complete. Launching workers.
00:06:42.899 ========================================================
00:06:42.899 Latency(us)
00:06:42.899 Device Information : IOPS MiB/s Average min max
00:06:42.899 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 154.48 0.08 898791.98 421.53 2002012.64
00:06:42.899 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 172.37 0.08 1075849.24 1743.19 2002193.64
00:06:42.899 ========================================================
00:06:42.899 Total : 326.85 0.16 992164.12 421.53 2002193.64
00:06:42.899
00:06:42.899 [2024-11-20 12:16:25.792463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16979a0 (9): Bad file descriptor
00:06:42.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:42.899 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:42.899 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:42.899 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 278250
00:06:42.899 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:43.467 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:43.467 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 278250
00:06:43.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (278250) - No such process
00:06:43.467 12:16:26
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 278250 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 278250 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 278250 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:43.468 [2024-11-20 12:16:26.323020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=278905 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 278905 00:06:43.468 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:43.468 [2024-11-20 12:16:26.412532] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:43.727 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:43.727 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 278905 00:06:43.727 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:44.295 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:44.295 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 278905 00:06:44.295 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:44.864 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:44.864 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 278905 00:06:44.864 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:45.432 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:45.432 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 278905 00:06:45.432 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:45.999 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:45.999 12:16:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 278905 00:06:46.000 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:46.257 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:46.257 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 278905 00:06:46.257 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:46.516 Initializing NVMe Controllers 00:06:46.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:46.516 Controller IO queue size 128, less than required. 00:06:46.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:46.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:46.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:46.516 Initialization complete. Launching workers. 
00:06:46.516 ========================================================
00:06:46.516 Latency(us)
00:06:46.516 Device Information : IOPS MiB/s Average min max
00:06:46.516 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002136.24 1000129.81 1007294.97
00:06:46.516 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003847.96 1000193.63 1010047.48
00:06:46.516 ========================================================
00:06:46.516 Total : 256.00 0.12 1002992.10 1000129.81 1010047.48
00:06:46.516
00:06:46.776 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:46.776 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 278905
00:06:46.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (278905) - No such process
00:06:46.776 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 278905
00:06:46.776 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:46.776 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:46.776 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:46.776 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:46.776 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:46.776 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:46.776 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:46.776 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r
nvme-tcp 00:06:46.776 rmmod nvme_tcp 00:06:47.058 rmmod nvme_fabrics 00:06:47.058 rmmod nvme_keyring 00:06:47.058 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:47.058 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:47.058 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:47.058 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 278187 ']' 00:06:47.058 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 278187 00:06:47.058 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 278187 ']' 00:06:47.058 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 278187 00:06:47.058 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:47.058 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.058 12:16:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278187 00:06:47.058 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.058 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.058 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278187' 00:06:47.058 killing process with pid 278187 00:06:47.058 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 278187 00:06:47.058 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 278187 
00:06:47.392 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:47.392 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:47.392 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:47.392 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:47.392 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:47.392 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:47.392 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:47.392 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:47.392 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:47.392 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.392 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:47.392 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.299 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:49.299 00:06:49.299 real 0m16.157s 00:06:49.299 user 0m29.028s 00:06:49.299 sys 0m5.500s 00:06:49.299 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.299 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:49.299 ************************************ 00:06:49.299 END TEST 
nvmf_delete_subsystem 00:06:49.299 ************************************ 00:06:49.299 12:16:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:49.299 12:16:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:49.299 12:16:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.299 12:16:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:49.299 ************************************ 00:06:49.299 START TEST nvmf_host_management 00:06:49.299 ************************************ 00:06:49.299 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:49.299 * Looking for test storage... 00:06:49.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:49.299 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.299 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.299 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.559 12:16:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.559 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:49.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.559 --rc genhtml_branch_coverage=1 00:06:49.559 --rc genhtml_function_coverage=1 00:06:49.559 --rc genhtml_legend=1 00:06:49.559 --rc 
geninfo_all_blocks=1 00:06:49.559 --rc geninfo_unexecuted_blocks=1 00:06:49.559 00:06:49.559 ' 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:49.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.560 --rc genhtml_branch_coverage=1 00:06:49.560 --rc genhtml_function_coverage=1 00:06:49.560 --rc genhtml_legend=1 00:06:49.560 --rc geninfo_all_blocks=1 00:06:49.560 --rc geninfo_unexecuted_blocks=1 00:06:49.560 00:06:49.560 ' 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:49.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.560 --rc genhtml_branch_coverage=1 00:06:49.560 --rc genhtml_function_coverage=1 00:06:49.560 --rc genhtml_legend=1 00:06:49.560 --rc geninfo_all_blocks=1 00:06:49.560 --rc geninfo_unexecuted_blocks=1 00:06:49.560 00:06:49.560 ' 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:49.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.560 --rc genhtml_branch_coverage=1 00:06:49.560 --rc genhtml_function_coverage=1 00:06:49.560 --rc genhtml_legend=1 00:06:49.560 --rc geninfo_all_blocks=1 00:06:49.560 --rc geninfo_unexecuted_blocks=1 00:06:49.560 00:06:49.560 ' 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:49.560 
12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
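The `paths/export.sh` lines above prepend the same toolchain directories each time the file is sourced, which is why `PATH` accumulates the repeated `/opt/go`, `/opt/protoc`, and `/opt/golangci` entries. A small dedup helper (my addition, not part of the SPDK scripts) that keeps first occurrences in order:

```shell
# Sketch: collapse duplicate PATH entries, preserving first-seen order.
dedup_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}
dedup_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin:/usr/bin"
# -> /opt/go/1.21.1/bin:/usr/bin:/bin
```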
00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:49.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:49.560 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
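The `[: : integer expression expected` message above comes from `'[' '' -eq 1 ']'`: `-eq` requires both operands to be integers, and the left side expanded empty. A sketch of the usual defensive form (`NO_HUGE_COUNT` is an illustrative name, not necessarily the variable `common.sh` uses):

```shell
# An empty operand with -eq reproduces the log's error; defaulting with :-0
# keeps the comparison well-formed.
NO_HUGE_COUNT=""
# [ "$NO_HUGE_COUNT" -eq 1 ] would print "integer expression expected"; this won't:
if [ "${NO_HUGE_COUNT:-0}" -eq 1 ]; then
    huge_state="disabled"
else
    huge_state="enabled"
fi
echo "$huge_state"   # enabled
```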
00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:06:56.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:56.135 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:56.135 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:56.135 12:16:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:56.135 Found net devices under 0000:86:00.0: cvl_0_0 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:56.135 Found net devices under 0000:86:00.1: cvl_0_1 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:56.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:56.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:06:56.135 00:06:56.135 --- 10.0.0.2 ping statistics --- 00:06:56.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.135 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:56.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:56.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:06:56.135 00:06:56.135 --- 10.0.0.1 ping statistics --- 00:06:56.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.135 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:56.135 12:16:38 
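The `ip netns` / `ip addr` / `ip link` sequence above moves the target interface into its own network namespace and brings up a point-to-point 10.0.0.0/24 link, which the two pings then verify in both directions. A dry-run sketch collecting those steps (the function name is mine; real execution needs root, so `DRY_RUN=1` only prints the plan):

```shell
# Sketch of the namespace plumbing performed in the log above.
setup_tcp_ns() {
    local ns=$1 target_if=$2 initiator_if=$3
    local run=${DRY_RUN:+echo}   # with DRY_RUN set, print instead of execute
    $run ip netns add "$ns"
    $run ip link set "$target_if" netns "$ns"
    $run ip addr add 10.0.0.1/24 dev "$initiator_if"
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    $run ip link set "$initiator_if" up
    $run ip netns exec "$ns" ip link set "$target_if" up
    $run ip netns exec "$ns" ip link set lo up
}
DRY_RUN=1 setup_tcp_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```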
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=283137 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 283137 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 283137 ']' 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.135 [2024-11-20 12:16:38.634712] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:06:56.136 [2024-11-20 12:16:38.634755] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.136 [2024-11-20 12:16:38.713861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.136 [2024-11-20 12:16:38.757445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:56.136 [2024-11-20 12:16:38.757484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:56.136 [2024-11-20 12:16:38.757491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:56.136 [2024-11-20 12:16:38.757497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:56.136 [2024-11-20 12:16:38.757502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
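The target above was started with `-m 0x1E`, and the EAL parameters show `-c 0x1E`; expanding that coremask explains which reactor cores come up. A small sketch of the expansion:

```shell
# Sketch: expand a DPDK-style hex coremask into core ids.
# 0x1E = 0b11110, i.e. bits 1-4 are set.
mask=0x1E
cores=""
for core in $(seq 0 31); do
    if (( (mask >> core) & 1 )); then
        cores+="$core "
    fi
done
echo "$cores"   # 1 2 3 4
```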
00:06:56.136 [2024-11-20 12:16:38.759013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.136 [2024-11-20 12:16:38.759120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.136 [2024-11-20 12:16:38.759229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.136 [2024-11-20 12:16:38.759230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.136 [2024-11-20 12:16:38.896451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:56.136 12:16:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.136 Malloc0 00:06:56.136 [2024-11-20 12:16:38.981646] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:56.136 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.136 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=283180 00:06:56.136 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 283180 /var/tmp/bdevperf.sock 00:06:56.136 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 283180 ']' 00:06:56.136 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:56.136 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:56.136 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:56.136 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.136 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:56.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:56.136 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:56.136 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.136 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:56.136 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.136 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:56.136 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:56.136 { 00:06:56.136 "params": { 00:06:56.136 "name": "Nvme$subsystem", 00:06:56.136 "trtype": "$TEST_TRANSPORT", 00:06:56.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:56.136 "adrfam": "ipv4", 00:06:56.136 "trsvcid": "$NVMF_PORT", 00:06:56.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:56.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:56.136 "hdgst": ${hdgst:-false}, 
00:06:56.136 "ddgst": ${ddgst:-false} 00:06:56.136 }, 00:06:56.136 "method": "bdev_nvme_attach_controller" 00:06:56.136 } 00:06:56.136 EOF 00:06:56.136 )") 00:06:56.136 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:56.136 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:56.136 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:56.136 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:56.136 "params": { 00:06:56.136 "name": "Nvme0", 00:06:56.136 "trtype": "tcp", 00:06:56.136 "traddr": "10.0.0.2", 00:06:56.136 "adrfam": "ipv4", 00:06:56.136 "trsvcid": "4420", 00:06:56.136 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:56.136 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:56.136 "hdgst": false, 00:06:56.136 "ddgst": false 00:06:56.136 }, 00:06:56.136 "method": "bdev_nvme_attach_controller" 00:06:56.136 }' 00:06:56.136 [2024-11-20 12:16:39.078435] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:06:56.136 [2024-11-20 12:16:39.078479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid283180 ] 00:06:56.136 [2024-11-20 12:16:39.154316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.136 [2024-11-20 12:16:39.195853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.395 Running I/O for 10 seconds... 
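The `gen_nvmf_target_json` heredoc above templates the controller config that bdevperf receives via `--json /dev/fd/63`: shell variables like `$NVMF_FIRST_TARGET_IP` expand inside the `EOF` block, producing the concrete JSON printed afterwards. A standalone sketch with this run's values hard-coded:

```shell
# Sketch: the heredoc-templating idea from the log, with this run's values.
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config" | grep -o '"traddr": "[^"]*"'   # "traddr": "10.0.0.2"
```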
00:06:56.653 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.653 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:56.653 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:56.653 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.653 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.653 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.653 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:56.653 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:56.653 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:56.653 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:56.653 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:56.653 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:56.653 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:56.653 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:56.654 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:56.654 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:56.654 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.654 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.654 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.654 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=98 00:06:56.654 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 98 -ge 100 ']' 00:06:56.654 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:56.914 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:56.914 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:56.914 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:56.914 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:56.914 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.914 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.914 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.914 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:06:56.914 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:06:56.914 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:56.914 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:56.914 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:56.914 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:56.914 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.914 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.914 [2024-11-20 12:16:39.884962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 
[2024-11-20 12:16:39.885224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.914 [2024-11-20 12:16:39.885341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.914 [2024-11-20 12:16:39.885348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885569] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885654] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.915 [2024-11-20 12:16:39.885814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.915 [2024-11-20 12:16:39.885820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.916 
[2024-11-20 12:16:39.885830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.916 [2024-11-20 12:16:39.885838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.916 [2024-11-20 12:16:39.885846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.916 [2024-11-20 12:16:39.885853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.916 [2024-11-20 12:16:39.885861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.916 [2024-11-20 12:16:39.885868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.916 [2024-11-20 12:16:39.885876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.916 [2024-11-20 12:16:39.885882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.916 [2024-11-20 12:16:39.885890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.916 [2024-11-20 12:16:39.885898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.916 [2024-11-20 12:16:39.885906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.916 [2024-11-20 12:16:39.885913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.916 [2024-11-20 12:16:39.885921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.916 [2024-11-20 12:16:39.885927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.916 [2024-11-20 12:16:39.885935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.916 [2024-11-20 12:16:39.885941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.916 [2024-11-20 12:16:39.885956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.916 [2024-11-20 12:16:39.885964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.916 [2024-11-20 12:16:39.885972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.916 [2024-11-20 12:16:39.885979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.916 [2024-11-20 12:16:39.886965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:56.916 task offset: 104192 on job bdev=Nvme0n1 fails 00:06:56.916 00:06:56.916 Latency(us) 00:06:56.916 [2024-11-20T11:16:40.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:56.916 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:56.916 Job: Nvme0n1 
ended in about 0.41 seconds with error 00:06:56.916 Verification LBA range: start 0x0 length 0x400 00:06:56.916 Nvme0n1 : 0.41 1889.00 118.06 157.42 0.00 30425.89 1510.18 27582.11 00:06:56.916 [2024-11-20T11:16:40.032Z] =================================================================================================================== 00:06:56.916 [2024-11-20T11:16:40.032Z] Total : 1889.00 118.06 157.42 0.00 30425.89 1510.18 27582.11 00:06:56.916 [2024-11-20 12:16:39.889379] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.916 [2024-11-20 12:16:39.889404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d96500 (9): Bad file descriptor 00:06:56.916 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.916 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:56.916 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.916 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.916 [2024-11-20 12:16:39.896574] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:06:56.916 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.916 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:57.853 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 283180 00:06:57.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (283180) - No such process 00:06:57.853 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:57.853 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:57.853 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:57.853 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:57.853 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:57.853 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:57.853 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:57.853 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:57.853 { 00:06:57.853 "params": { 00:06:57.853 "name": "Nvme$subsystem", 00:06:57.853 "trtype": "$TEST_TRANSPORT", 00:06:57.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:57.853 "adrfam": "ipv4", 00:06:57.853 "trsvcid": "$NVMF_PORT", 00:06:57.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:57.853 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:06:57.853 "hdgst": ${hdgst:-false}, 00:06:57.853 "ddgst": ${ddgst:-false} 00:06:57.853 }, 00:06:57.853 "method": "bdev_nvme_attach_controller" 00:06:57.853 } 00:06:57.853 EOF 00:06:57.853 )") 00:06:57.853 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:57.853 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:57.853 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:57.853 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:57.853 "params": { 00:06:57.853 "name": "Nvme0", 00:06:57.853 "trtype": "tcp", 00:06:57.853 "traddr": "10.0.0.2", 00:06:57.853 "adrfam": "ipv4", 00:06:57.853 "trsvcid": "4420", 00:06:57.853 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:57.853 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:57.853 "hdgst": false, 00:06:57.853 "ddgst": false 00:06:57.853 }, 00:06:57.853 "method": "bdev_nvme_attach_controller" 00:06:57.853 }' 00:06:57.853 [2024-11-20 12:16:40.951989] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:06:57.853 [2024-11-20 12:16:40.952039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid283553 ] 00:06:58.112 [2024-11-20 12:16:41.030189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.112 [2024-11-20 12:16:41.071643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.371 Running I/O for 1 seconds... 
00:06:59.308 1946.00 IOPS, 121.62 MiB/s 00:06:59.308 Latency(us) 00:06:59.308 [2024-11-20T11:16:42.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.308 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:59.308 Verification LBA range: start 0x0 length 0x400 00:06:59.308 Nvme0n1 : 1.01 1986.92 124.18 0.00 0.00 31588.25 2336.50 27582.11 00:06:59.308 [2024-11-20T11:16:42.424Z] =================================================================================================================== 00:06:59.308 [2024-11-20T11:16:42.424Z] Total : 1986.92 124.18 0.00 0.00 31588.25 2336.50 27582.11 00:06:59.308 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:59.308 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:59.308 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:59.308 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:59.308 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:59.308 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:59.308 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:59.308 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:59.308 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:59.308 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:59.308 12:16:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:59.308 rmmod nvme_tcp 00:06:59.568 rmmod nvme_fabrics 00:06:59.568 rmmod nvme_keyring 00:06:59.568 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:59.568 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:59.568 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:59.568 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 283137 ']' 00:06:59.568 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 283137 00:06:59.568 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 283137 ']' 00:06:59.568 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 283137 00:06:59.568 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:59.568 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.568 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 283137 00:06:59.568 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:59.568 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:59.568 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 283137' 00:06:59.568 killing process with pid 283137 00:06:59.568 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 283137 00:06:59.568 12:16:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 283137 00:06:59.827 [2024-11-20 12:16:42.706115] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:59.827 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:59.827 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:59.827 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:59.827 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:59.827 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:59.827 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:59.827 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:59.827 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:59.828 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:59.828 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.828 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.828 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.735 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:01.735 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:01.735 00:07:01.735 real 0m12.495s 00:07:01.735 user 0m19.766s 
00:07:01.735 sys 0m5.637s 00:07:01.735 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.735 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.735 ************************************ 00:07:01.735 END TEST nvmf_host_management 00:07:01.735 ************************************ 00:07:01.735 12:16:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:01.735 12:16:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.735 12:16:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.735 12:16:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:01.995 ************************************ 00:07:01.995 START TEST nvmf_lvol 00:07:01.995 ************************************ 00:07:01.995 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:01.995 * Looking for test storage... 
00:07:01.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.995 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.995 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.995 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.995 12:16:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.995 --rc genhtml_branch_coverage=1 00:07:01.995 --rc genhtml_function_coverage=1 00:07:01.995 --rc genhtml_legend=1 00:07:01.995 --rc geninfo_all_blocks=1 00:07:01.995 --rc geninfo_unexecuted_blocks=1 
00:07:01.995 00:07:01.995 ' 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.995 --rc genhtml_branch_coverage=1 00:07:01.995 --rc genhtml_function_coverage=1 00:07:01.995 --rc genhtml_legend=1 00:07:01.995 --rc geninfo_all_blocks=1 00:07:01.995 --rc geninfo_unexecuted_blocks=1 00:07:01.995 00:07:01.995 ' 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.995 --rc genhtml_branch_coverage=1 00:07:01.995 --rc genhtml_function_coverage=1 00:07:01.995 --rc genhtml_legend=1 00:07:01.995 --rc geninfo_all_blocks=1 00:07:01.995 --rc geninfo_unexecuted_blocks=1 00:07:01.995 00:07:01.995 ' 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.995 --rc genhtml_branch_coverage=1 00:07:01.995 --rc genhtml_function_coverage=1 00:07:01.995 --rc genhtml_legend=1 00:07:01.995 --rc geninfo_all_blocks=1 00:07:01.995 --rc geninfo_unexecuted_blocks=1 00:07:01.995 00:07:01.995 ' 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.995 12:16:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.995 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:01.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:01.996 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:08.569 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:08.569 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:08.569 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:08.570 
12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:08.570 Found net devices under 0000:86:00.0: cvl_0_0 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:08.570 12:16:50 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:08.570 Found net devices under 0000:86:00.1: cvl_0_1 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:08.570 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:08.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:08.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:07:08.570 00:07:08.570 --- 10.0.0.2 ping statistics --- 00:07:08.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.570 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:08.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:08.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:07:08.570 00:07:08.570 --- 10.0.0.1 ping statistics --- 00:07:08.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.570 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=287424 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 287424 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 287424 ']' 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.570 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:08.570 [2024-11-20 12:16:51.142329] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:07:08.570 [2024-11-20 12:16:51.142370] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.570 [2024-11-20 12:16:51.222316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.570 [2024-11-20 12:16:51.264776] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.570 [2024-11-20 12:16:51.264814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.570 [2024-11-20 12:16:51.264822] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.570 [2024-11-20 12:16:51.264827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.570 [2024-11-20 12:16:51.264833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:08.570 [2024-11-20 12:16:51.266225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.570 [2024-11-20 12:16:51.266330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.570 [2024-11-20 12:16:51.266331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.138 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.138 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:09.138 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:09.138 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:09.138 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:09.138 12:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.138 12:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:09.139 [2024-11-20 12:16:52.201655] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.139 12:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:09.398 12:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:09.398 12:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:09.657 12:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:09.657 12:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:09.916 12:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:10.176 12:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=1868c120-e032-4e4c-abeb-786a96e08357 00:07:10.176 12:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1868c120-e032-4e4c-abeb-786a96e08357 lvol 20 00:07:10.435 12:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=59b3f853-8c33-40de-8472-913c75b266b4 00:07:10.435 12:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:10.435 12:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 59b3f853-8c33-40de-8472-913c75b266b4 00:07:10.694 12:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:10.963 [2024-11-20 12:16:53.876202] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.963 12:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:11.222 12:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=287921 00:07:11.222 12:16:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:11.222 12:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:12.159 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 59b3f853-8c33-40de-8472-913c75b266b4 MY_SNAPSHOT 00:07:12.417 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a7646c3d-dc9e-4868-847f-aa52f6495f9b 00:07:12.417 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 59b3f853-8c33-40de-8472-913c75b266b4 30 00:07:12.675 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a7646c3d-dc9e-4868-847f-aa52f6495f9b MY_CLONE 00:07:12.933 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ea0a4ed4-26ef-45be-9e44-0e773628a01e 00:07:12.933 12:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ea0a4ed4-26ef-45be-9e44-0e773628a01e 00:07:13.500 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 287921 00:07:21.624 Initializing NVMe Controllers 00:07:21.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:21.624 Controller IO queue size 128, less than required. 00:07:21.624 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:21.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:21.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:21.624 Initialization complete. Launching workers. 00:07:21.624 ======================================================== 00:07:21.624 Latency(us) 00:07:21.624 Device Information : IOPS MiB/s Average min max 00:07:21.624 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11879.70 46.41 10779.49 480.82 101114.86 00:07:21.624 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11855.30 46.31 10797.56 3013.55 41007.23 00:07:21.624 ======================================================== 00:07:21.624 Total : 23735.00 92.71 10788.52 480.82 101114.86 00:07:21.624 00:07:21.624 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:21.884 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 59b3f853-8c33-40de-8472-913c75b266b4 00:07:22.143 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1868c120-e032-4e4c-abeb-786a96e08357 00:07:22.143 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:22.143 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:22.143 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:22.143 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:22.143 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:22.143 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:22.143 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:22.143 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:22.143 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:22.143 rmmod nvme_tcp 00:07:22.143 rmmod nvme_fabrics 00:07:22.403 rmmod nvme_keyring 00:07:22.403 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:22.403 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:22.403 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:22.403 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 287424 ']' 00:07:22.403 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 287424 00:07:22.403 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 287424 ']' 00:07:22.403 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 287424 00:07:22.403 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:22.403 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.403 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 287424 00:07:22.403 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.403 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.403 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 287424' 00:07:22.403 killing process with pid 287424 00:07:22.403 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@973 -- # kill 287424 00:07:22.403 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 287424 00:07:22.663 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:22.663 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:22.663 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:22.663 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:22.663 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:22.663 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:22.663 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:22.663 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:22.663 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:22.663 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.663 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:22.663 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.569 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:24.569 00:07:24.569 real 0m22.745s 00:07:24.569 user 1m5.670s 00:07:24.569 sys 0m7.721s 00:07:24.569 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.569 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:24.569 ************************************ 00:07:24.569 END TEST nvmf_lvol 00:07:24.569 
************************************ 00:07:24.569 12:17:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:24.569 12:17:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:24.569 12:17:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.569 12:17:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:24.829 ************************************ 00:07:24.829 START TEST nvmf_lvs_grow 00:07:24.829 ************************************ 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:24.829 * Looking for test storage... 00:07:24.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:24.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.829 --rc genhtml_branch_coverage=1 00:07:24.829 --rc genhtml_function_coverage=1 00:07:24.829 --rc genhtml_legend=1 00:07:24.829 --rc geninfo_all_blocks=1 00:07:24.829 --rc geninfo_unexecuted_blocks=1 00:07:24.829 00:07:24.829 ' 
00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:24.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.829 --rc genhtml_branch_coverage=1 00:07:24.829 --rc genhtml_function_coverage=1 00:07:24.829 --rc genhtml_legend=1 00:07:24.829 --rc geninfo_all_blocks=1 00:07:24.829 --rc geninfo_unexecuted_blocks=1 00:07:24.829 00:07:24.829 ' 00:07:24.829 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:24.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.829 --rc genhtml_branch_coverage=1 00:07:24.829 --rc genhtml_function_coverage=1 00:07:24.829 --rc genhtml_legend=1 00:07:24.829 --rc geninfo_all_blocks=1 00:07:24.829 --rc geninfo_unexecuted_blocks=1 00:07:24.830 00:07:24.830 ' 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:24.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.830 --rc genhtml_branch_coverage=1 00:07:24.830 --rc genhtml_function_coverage=1 00:07:24.830 --rc genhtml_legend=1 00:07:24.830 --rc geninfo_all_blocks=1 00:07:24.830 --rc geninfo_unexecuted_blocks=1 00:07:24.830 00:07:24.830 ' 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.830 12:17:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.830 
12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.830 12:17:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:24.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.830 
12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:24.830 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.402 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:31.403 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:31.403 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:31.403 
12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:31.403 Found net devices under 0000:86:00.0: cvl_0_0 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:31.403 Found net devices under 0000:86:00.1: cvl_0_1 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:31.403 12:17:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:31.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:07:31.403 00:07:31.403 --- 10.0.0.2 ping statistics --- 00:07:31.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.403 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:31.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:31.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:07:31.403 00:07:31.403 --- 10.0.0.1 ping statistics --- 00:07:31.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.403 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=293329 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 293329 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 293329 ']' 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.403 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.404 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.404 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.404 12:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:31.404 [2024-11-20 12:17:14.001893] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:07:31.404 [2024-11-20 12:17:14.001938] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.404 [2024-11-20 12:17:14.078940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.404 [2024-11-20 12:17:14.120229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.404 [2024-11-20 12:17:14.120265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.404 [2024-11-20 12:17:14.120272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.404 [2024-11-20 12:17:14.120278] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.404 [2024-11-20 12:17:14.120284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:31.404 [2024-11-20 12:17:14.120816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:31.404 [2024-11-20 12:17:14.424746] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:31.404 ************************************ 00:07:31.404 START TEST lvs_grow_clean 00:07:31.404 ************************************ 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:31.404 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:31.663 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:31.663 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:31.922 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7b4db03d-f666-41fb-8eee-469c6d2b31d8 00:07:31.922 12:17:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b4db03d-f666-41fb-8eee-469c6d2b31d8 00:07:31.923 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:32.181 12:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:32.181 12:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:32.181 12:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7b4db03d-f666-41fb-8eee-469c6d2b31d8 lvol 150 00:07:32.181 12:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a983cbbe-c67e-4b50-8aa0-911f5681c700 00:07:32.181 12:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:32.181 12:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:32.440 [2024-11-20 12:17:15.425846] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:32.440 [2024-11-20 12:17:15.425896] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:32.440 true 00:07:32.440 12:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b4db03d-f666-41fb-8eee-469c6d2b31d8 00:07:32.440 12:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:32.699 12:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:32.699 12:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:32.699 12:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a983cbbe-c67e-4b50-8aa0-911f5681c700 00:07:32.959 12:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:33.217 [2024-11-20 12:17:16.168071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.217 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:33.477 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:33.477 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=293827 00:07:33.477 12:17:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:33.477 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 293827 /var/tmp/bdevperf.sock 00:07:33.477 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 293827 ']' 00:07:33.477 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:33.477 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.477 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:33.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:33.477 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.477 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:33.477 [2024-11-20 12:17:16.410760] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:07:33.477 [2024-11-20 12:17:16.410805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293827 ] 00:07:33.477 [2024-11-20 12:17:16.483642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.477 [2024-11-20 12:17:16.524475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.736 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.736 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:33.736 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:33.996 Nvme0n1 00:07:33.996 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:33.996 [ 00:07:33.996 { 00:07:33.996 "name": "Nvme0n1", 00:07:33.996 "aliases": [ 00:07:33.996 "a983cbbe-c67e-4b50-8aa0-911f5681c700" 00:07:33.996 ], 00:07:33.996 "product_name": "NVMe disk", 00:07:33.996 "block_size": 4096, 00:07:33.996 "num_blocks": 38912, 00:07:33.996 "uuid": "a983cbbe-c67e-4b50-8aa0-911f5681c700", 00:07:33.996 "numa_id": 1, 00:07:33.996 "assigned_rate_limits": { 00:07:33.996 "rw_ios_per_sec": 0, 00:07:33.996 "rw_mbytes_per_sec": 0, 00:07:33.996 "r_mbytes_per_sec": 0, 00:07:33.996 "w_mbytes_per_sec": 0 00:07:33.996 }, 00:07:33.996 "claimed": false, 00:07:33.996 "zoned": false, 00:07:33.996 "supported_io_types": { 00:07:33.996 "read": true, 
00:07:33.996 "write": true, 00:07:33.996 "unmap": true, 00:07:33.996 "flush": true, 00:07:33.996 "reset": true, 00:07:33.996 "nvme_admin": true, 00:07:33.996 "nvme_io": true, 00:07:33.996 "nvme_io_md": false, 00:07:33.996 "write_zeroes": true, 00:07:33.996 "zcopy": false, 00:07:33.996 "get_zone_info": false, 00:07:33.996 "zone_management": false, 00:07:33.996 "zone_append": false, 00:07:33.996 "compare": true, 00:07:33.996 "compare_and_write": true, 00:07:33.996 "abort": true, 00:07:33.996 "seek_hole": false, 00:07:33.996 "seek_data": false, 00:07:33.996 "copy": true, 00:07:33.996 "nvme_iov_md": false 00:07:33.996 }, 00:07:33.996 "memory_domains": [ 00:07:33.996 { 00:07:33.996 "dma_device_id": "system", 00:07:33.996 "dma_device_type": 1 00:07:33.996 } 00:07:33.996 ], 00:07:33.996 "driver_specific": { 00:07:33.996 "nvme": [ 00:07:33.996 { 00:07:33.996 "trid": { 00:07:33.996 "trtype": "TCP", 00:07:33.996 "adrfam": "IPv4", 00:07:33.996 "traddr": "10.0.0.2", 00:07:33.996 "trsvcid": "4420", 00:07:33.996 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:33.996 }, 00:07:33.996 "ctrlr_data": { 00:07:33.996 "cntlid": 1, 00:07:33.996 "vendor_id": "0x8086", 00:07:33.996 "model_number": "SPDK bdev Controller", 00:07:33.996 "serial_number": "SPDK0", 00:07:33.996 "firmware_revision": "25.01", 00:07:33.996 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:33.996 "oacs": { 00:07:33.996 "security": 0, 00:07:33.996 "format": 0, 00:07:33.996 "firmware": 0, 00:07:33.996 "ns_manage": 0 00:07:33.996 }, 00:07:33.996 "multi_ctrlr": true, 00:07:33.996 "ana_reporting": false 00:07:33.996 }, 00:07:33.996 "vs": { 00:07:33.996 "nvme_version": "1.3" 00:07:33.996 }, 00:07:33.996 "ns_data": { 00:07:33.996 "id": 1, 00:07:33.996 "can_share": true 00:07:33.996 } 00:07:33.996 } 00:07:33.996 ], 00:07:33.996 "mp_policy": "active_passive" 00:07:33.996 } 00:07:33.996 } 00:07:33.996 ] 00:07:34.255 12:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=294010 
00:07:34.255 12:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:34.255 12:17:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:34.255 Running I/O for 10 seconds... 00:07:35.190 Latency(us) 00:07:35.190 [2024-11-20T11:17:18.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.190 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.190 Nvme0n1 : 1.00 22681.00 88.60 0.00 0.00 0.00 0.00 0.00 00:07:35.190 [2024-11-20T11:17:18.306Z] =================================================================================================================== 00:07:35.190 [2024-11-20T11:17:18.306Z] Total : 22681.00 88.60 0.00 0.00 0.00 0.00 0.00 00:07:35.190 00:07:36.125 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7b4db03d-f666-41fb-8eee-469c6d2b31d8 00:07:36.125 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.125 Nvme0n1 : 2.00 22772.50 88.96 0.00 0.00 0.00 0.00 0.00 00:07:36.125 [2024-11-20T11:17:19.241Z] =================================================================================================================== 00:07:36.125 [2024-11-20T11:17:19.241Z] Total : 22772.50 88.96 0.00 0.00 0.00 0.00 0.00 00:07:36.125 00:07:36.384 true 00:07:36.384 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b4db03d-f666-41fb-8eee-469c6d2b31d8 00:07:36.384 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:07:36.642 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:36.642 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:36.642 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 294010 00:07:37.210 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.210 Nvme0n1 : 3.00 22759.33 88.90 0.00 0.00 0.00 0.00 0.00 00:07:37.210 [2024-11-20T11:17:20.326Z] =================================================================================================================== 00:07:37.210 [2024-11-20T11:17:20.326Z] Total : 22759.33 88.90 0.00 0.00 0.00 0.00 0.00 00:07:37.210 00:07:38.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.146 Nvme0n1 : 4.00 22816.25 89.13 0.00 0.00 0.00 0.00 0.00 00:07:38.146 [2024-11-20T11:17:21.262Z] =================================================================================================================== 00:07:38.146 [2024-11-20T11:17:21.262Z] Total : 22816.25 89.13 0.00 0.00 0.00 0.00 0.00 00:07:38.146 00:07:39.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.523 Nvme0n1 : 5.00 22875.80 89.36 0.00 0.00 0.00 0.00 0.00 00:07:39.523 [2024-11-20T11:17:22.639Z] =================================================================================================================== 00:07:39.523 [2024-11-20T11:17:22.639Z] Total : 22875.80 89.36 0.00 0.00 0.00 0.00 0.00 00:07:39.523 00:07:40.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.458 Nvme0n1 : 6.00 22908.50 89.49 0.00 0.00 0.00 0.00 0.00 00:07:40.458 [2024-11-20T11:17:23.574Z] =================================================================================================================== 00:07:40.458 
[2024-11-20T11:17:23.574Z] Total : 22908.50 89.49 0.00 0.00 0.00 0.00 0.00 00:07:40.458 00:07:41.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.393 Nvme0n1 : 7.00 22937.86 89.60 0.00 0.00 0.00 0.00 0.00 00:07:41.393 [2024-11-20T11:17:24.509Z] =================================================================================================================== 00:07:41.393 [2024-11-20T11:17:24.509Z] Total : 22937.86 89.60 0.00 0.00 0.00 0.00 0.00 00:07:41.393 00:07:42.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.329 Nvme0n1 : 8.00 22968.00 89.72 0.00 0.00 0.00 0.00 0.00 00:07:42.329 [2024-11-20T11:17:25.445Z] =================================================================================================================== 00:07:42.329 [2024-11-20T11:17:25.445Z] Total : 22968.00 89.72 0.00 0.00 0.00 0.00 0.00 00:07:42.329 00:07:43.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.373 Nvme0n1 : 9.00 22991.78 89.81 0.00 0.00 0.00 0.00 0.00 00:07:43.373 [2024-11-20T11:17:26.489Z] =================================================================================================================== 00:07:43.373 [2024-11-20T11:17:26.489Z] Total : 22991.78 89.81 0.00 0.00 0.00 0.00 0.00 00:07:43.373 00:07:44.335 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.335 Nvme0n1 : 10.00 22997.80 89.84 0.00 0.00 0.00 0.00 0.00 00:07:44.335 [2024-11-20T11:17:27.451Z] =================================================================================================================== 00:07:44.335 [2024-11-20T11:17:27.451Z] Total : 22997.80 89.84 0.00 0.00 0.00 0.00 0.00 00:07:44.335 00:07:44.335 00:07:44.335 Latency(us) 00:07:44.335 [2024-11-20T11:17:27.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.335 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:44.335 Nvme0n1 : 10.01 22997.55 89.83 0.00 0.00 5562.77 1517.30 10143.83 00:07:44.335 [2024-11-20T11:17:27.451Z] =================================================================================================================== 00:07:44.335 [2024-11-20T11:17:27.451Z] Total : 22997.55 89.83 0.00 0.00 5562.77 1517.30 10143.83 00:07:44.335 { 00:07:44.335 "results": [ 00:07:44.335 { 00:07:44.335 "job": "Nvme0n1", 00:07:44.335 "core_mask": "0x2", 00:07:44.335 "workload": "randwrite", 00:07:44.335 "status": "finished", 00:07:44.335 "queue_depth": 128, 00:07:44.335 "io_size": 4096, 00:07:44.335 "runtime": 10.005675, 00:07:44.335 "iops": 22997.548891004357, 00:07:44.335 "mibps": 89.83417535548577, 00:07:44.335 "io_failed": 0, 00:07:44.335 "io_timeout": 0, 00:07:44.335 "avg_latency_us": 5562.765009789439, 00:07:44.335 "min_latency_us": 1517.3008695652175, 00:07:44.335 "max_latency_us": 10143.83304347826 00:07:44.335 } 00:07:44.335 ], 00:07:44.335 "core_count": 1 00:07:44.335 } 00:07:44.335 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 293827 00:07:44.335 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 293827 ']' 00:07:44.335 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 293827 00:07:44.335 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:44.335 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.335 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 293827 00:07:44.335 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:44.335 12:17:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:44.335 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 293827' 00:07:44.335 killing process with pid 293827 00:07:44.335 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 293827 00:07:44.335 Received shutdown signal, test time was about 10.000000 seconds 00:07:44.335 00:07:44.335 Latency(us) 00:07:44.335 [2024-11-20T11:17:27.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.335 [2024-11-20T11:17:27.451Z] =================================================================================================================== 00:07:44.335 [2024-11-20T11:17:27.451Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:44.335 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 293827 00:07:44.594 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:44.594 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:44.852 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b4db03d-f666-41fb-8eee-469c6d2b31d8 00:07:44.852 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:45.110 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:07:45.110 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:45.110 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:45.369 [2024-11-20 12:17:28.258729] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:45.369 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b4db03d-f666-41fb-8eee-469c6d2b31d8 00:07:45.369 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:45.369 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b4db03d-f666-41fb-8eee-469c6d2b31d8 00:07:45.369 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.369 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.369 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.369 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.369 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.369 12:17:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.369 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.369 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:45.370 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b4db03d-f666-41fb-8eee-469c6d2b31d8 00:07:45.629 request: 00:07:45.629 { 00:07:45.629 "uuid": "7b4db03d-f666-41fb-8eee-469c6d2b31d8", 00:07:45.629 "method": "bdev_lvol_get_lvstores", 00:07:45.629 "req_id": 1 00:07:45.629 } 00:07:45.629 Got JSON-RPC error response 00:07:45.629 response: 00:07:45.629 { 00:07:45.629 "code": -19, 00:07:45.629 "message": "No such device" 00:07:45.629 } 00:07:45.629 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:45.629 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:45.629 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:45.629 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:45.629 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:45.629 aio_bdev 00:07:45.629 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev a983cbbe-c67e-4b50-8aa0-911f5681c700 00:07:45.629 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a983cbbe-c67e-4b50-8aa0-911f5681c700 00:07:45.629 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:45.629 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:45.629 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:45.629 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:45.629 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:45.887 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a983cbbe-c67e-4b50-8aa0-911f5681c700 -t 2000 00:07:46.146 [ 00:07:46.146 { 00:07:46.146 "name": "a983cbbe-c67e-4b50-8aa0-911f5681c700", 00:07:46.146 "aliases": [ 00:07:46.146 "lvs/lvol" 00:07:46.146 ], 00:07:46.146 "product_name": "Logical Volume", 00:07:46.146 "block_size": 4096, 00:07:46.146 "num_blocks": 38912, 00:07:46.146 "uuid": "a983cbbe-c67e-4b50-8aa0-911f5681c700", 00:07:46.146 "assigned_rate_limits": { 00:07:46.146 "rw_ios_per_sec": 0, 00:07:46.146 "rw_mbytes_per_sec": 0, 00:07:46.146 "r_mbytes_per_sec": 0, 00:07:46.146 "w_mbytes_per_sec": 0 00:07:46.146 }, 00:07:46.146 "claimed": false, 00:07:46.146 "zoned": false, 00:07:46.146 "supported_io_types": { 00:07:46.146 "read": true, 00:07:46.146 "write": true, 00:07:46.146 "unmap": true, 00:07:46.146 "flush": false, 00:07:46.146 "reset": true, 00:07:46.146 
"nvme_admin": false, 00:07:46.146 "nvme_io": false, 00:07:46.146 "nvme_io_md": false, 00:07:46.146 "write_zeroes": true, 00:07:46.146 "zcopy": false, 00:07:46.146 "get_zone_info": false, 00:07:46.146 "zone_management": false, 00:07:46.146 "zone_append": false, 00:07:46.146 "compare": false, 00:07:46.146 "compare_and_write": false, 00:07:46.146 "abort": false, 00:07:46.146 "seek_hole": true, 00:07:46.146 "seek_data": true, 00:07:46.146 "copy": false, 00:07:46.146 "nvme_iov_md": false 00:07:46.146 }, 00:07:46.146 "driver_specific": { 00:07:46.146 "lvol": { 00:07:46.146 "lvol_store_uuid": "7b4db03d-f666-41fb-8eee-469c6d2b31d8", 00:07:46.146 "base_bdev": "aio_bdev", 00:07:46.146 "thin_provision": false, 00:07:46.146 "num_allocated_clusters": 38, 00:07:46.146 "snapshot": false, 00:07:46.146 "clone": false, 00:07:46.146 "esnap_clone": false 00:07:46.146 } 00:07:46.146 } 00:07:46.146 } 00:07:46.146 ] 00:07:46.146 12:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:46.146 12:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b4db03d-f666-41fb-8eee-469c6d2b31d8 00:07:46.146 12:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:46.146 12:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:46.146 12:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b4db03d-f666-41fb-8eee-469c6d2b31d8 00:07:46.146 12:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:46.404 12:17:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:46.404 12:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a983cbbe-c67e-4b50-8aa0-911f5681c700 00:07:46.663 12:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7b4db03d-f666-41fb-8eee-469c6d2b31d8 00:07:46.922 12:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:46.922 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:47.181 00:07:47.181 real 0m15.599s 00:07:47.181 user 0m15.141s 00:07:47.181 sys 0m1.496s 00:07:47.181 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.181 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:47.181 ************************************ 00:07:47.181 END TEST lvs_grow_clean 00:07:47.181 ************************************ 00:07:47.181 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:47.181 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:47.181 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.181 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:47.181 ************************************ 
00:07:47.181 START TEST lvs_grow_dirty 00:07:47.181 ************************************ 00:07:47.181 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:47.181 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:47.181 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:47.181 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:47.181 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:47.181 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:47.181 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:47.181 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:47.181 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:47.181 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:47.440 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:47.441 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:47.441 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f39ed203-71ae-40da-b45a-620762480fb8 00:07:47.441 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f39ed203-71ae-40da-b45a-620762480fb8 00:07:47.441 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:47.699 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:47.699 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:47.699 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f39ed203-71ae-40da-b45a-620762480fb8 lvol 150 00:07:47.958 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=06c33b2f-60b4-41f8-ab0f-d15cf4b40c17 00:07:47.958 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:47.958 12:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:48.216 [2024-11-20 12:17:31.148053] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:48.216 [2024-11-20 12:17:31.148102] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:48.216 true 00:07:48.216 12:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f39ed203-71ae-40da-b45a-620762480fb8 00:07:48.216 12:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:48.475 12:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:48.475 12:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:48.475 12:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 06c33b2f-60b4-41f8-ab0f-d15cf4b40c17 00:07:48.733 12:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:48.992 [2024-11-20 12:17:31.894260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.992 12:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.992 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=296467 00:07:48.992 12:17:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:48.992 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:48.992 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 296467 /var/tmp/bdevperf.sock 00:07:48.992 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 296467 ']' 00:07:48.992 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:48.992 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.992 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:48.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:48.992 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.992 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:49.251 [2024-11-20 12:17:32.143961] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:07:49.251 [2024-11-20 12:17:32.144011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid296467 ] 00:07:49.251 [2024-11-20 12:17:32.219217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.251 [2024-11-20 12:17:32.259895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.251 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.251 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:49.251 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:49.820 Nvme0n1 00:07:49.820 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:50.080 [ 00:07:50.080 { 00:07:50.080 "name": "Nvme0n1", 00:07:50.080 "aliases": [ 00:07:50.080 "06c33b2f-60b4-41f8-ab0f-d15cf4b40c17" 00:07:50.080 ], 00:07:50.080 "product_name": "NVMe disk", 00:07:50.080 "block_size": 4096, 00:07:50.080 "num_blocks": 38912, 00:07:50.080 "uuid": "06c33b2f-60b4-41f8-ab0f-d15cf4b40c17", 00:07:50.080 "numa_id": 1, 00:07:50.080 "assigned_rate_limits": { 00:07:50.080 "rw_ios_per_sec": 0, 00:07:50.080 "rw_mbytes_per_sec": 0, 00:07:50.080 "r_mbytes_per_sec": 0, 00:07:50.080 "w_mbytes_per_sec": 0 00:07:50.080 }, 00:07:50.080 "claimed": false, 00:07:50.080 "zoned": false, 00:07:50.080 "supported_io_types": { 00:07:50.080 "read": true, 
00:07:50.080 "write": true, 00:07:50.080 "unmap": true, 00:07:50.080 "flush": true, 00:07:50.080 "reset": true, 00:07:50.080 "nvme_admin": true, 00:07:50.080 "nvme_io": true, 00:07:50.080 "nvme_io_md": false, 00:07:50.080 "write_zeroes": true, 00:07:50.080 "zcopy": false, 00:07:50.080 "get_zone_info": false, 00:07:50.080 "zone_management": false, 00:07:50.080 "zone_append": false, 00:07:50.080 "compare": true, 00:07:50.080 "compare_and_write": true, 00:07:50.080 "abort": true, 00:07:50.080 "seek_hole": false, 00:07:50.080 "seek_data": false, 00:07:50.080 "copy": true, 00:07:50.080 "nvme_iov_md": false 00:07:50.080 }, 00:07:50.080 "memory_domains": [ 00:07:50.080 { 00:07:50.080 "dma_device_id": "system", 00:07:50.080 "dma_device_type": 1 00:07:50.080 } 00:07:50.080 ], 00:07:50.080 "driver_specific": { 00:07:50.080 "nvme": [ 00:07:50.080 { 00:07:50.080 "trid": { 00:07:50.080 "trtype": "TCP", 00:07:50.080 "adrfam": "IPv4", 00:07:50.080 "traddr": "10.0.0.2", 00:07:50.080 "trsvcid": "4420", 00:07:50.080 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:50.080 }, 00:07:50.080 "ctrlr_data": { 00:07:50.080 "cntlid": 1, 00:07:50.080 "vendor_id": "0x8086", 00:07:50.080 "model_number": "SPDK bdev Controller", 00:07:50.080 "serial_number": "SPDK0", 00:07:50.080 "firmware_revision": "25.01", 00:07:50.080 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:50.080 "oacs": { 00:07:50.080 "security": 0, 00:07:50.080 "format": 0, 00:07:50.080 "firmware": 0, 00:07:50.080 "ns_manage": 0 00:07:50.080 }, 00:07:50.080 "multi_ctrlr": true, 00:07:50.080 "ana_reporting": false 00:07:50.080 }, 00:07:50.080 "vs": { 00:07:50.080 "nvme_version": "1.3" 00:07:50.080 }, 00:07:50.080 "ns_data": { 00:07:50.080 "id": 1, 00:07:50.080 "can_share": true 00:07:50.080 } 00:07:50.080 } 00:07:50.080 ], 00:07:50.080 "mp_policy": "active_passive" 00:07:50.080 } 00:07:50.080 } 00:07:50.080 ] 00:07:50.080 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=296662 
00:07:50.080 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:50.080 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:50.080 Running I/O for 10 seconds... 00:07:51.015 Latency(us) 00:07:51.015 [2024-11-20T11:17:34.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.015 Nvme0n1 : 1.00 22655.00 88.50 0.00 0.00 0.00 0.00 0.00 00:07:51.015 [2024-11-20T11:17:34.131Z] =================================================================================================================== 00:07:51.015 [2024-11-20T11:17:34.131Z] Total : 22655.00 88.50 0.00 0.00 0.00 0.00 0.00 00:07:51.015 00:07:51.950 12:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f39ed203-71ae-40da-b45a-620762480fb8 00:07:52.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.207 Nvme0n1 : 2.00 22765.00 88.93 0.00 0.00 0.00 0.00 0.00 00:07:52.207 [2024-11-20T11:17:35.323Z] =================================================================================================================== 00:07:52.207 [2024-11-20T11:17:35.323Z] Total : 22765.00 88.93 0.00 0.00 0.00 0.00 0.00 00:07:52.207 00:07:52.207 true 00:07:52.207 12:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f39ed203-71ae-40da-b45a-620762480fb8 00:07:52.207 12:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:07:52.466 12:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:52.466 12:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:52.466 12:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 296662 00:07:53.031 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.031 Nvme0n1 : 3.00 22784.00 89.00 0.00 0.00 0.00 0.00 0.00 00:07:53.031 [2024-11-20T11:17:36.147Z] =================================================================================================================== 00:07:53.031 [2024-11-20T11:17:36.147Z] Total : 22784.00 89.00 0.00 0.00 0.00 0.00 0.00 00:07:53.031 00:07:53.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.967 Nvme0n1 : 4.00 22842.50 89.23 0.00 0.00 0.00 0.00 0.00 00:07:53.967 [2024-11-20T11:17:37.083Z] =================================================================================================================== 00:07:53.967 [2024-11-20T11:17:37.083Z] Total : 22842.50 89.23 0.00 0.00 0.00 0.00 0.00 00:07:53.967 00:07:55.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.342 Nvme0n1 : 5.00 22885.00 89.39 0.00 0.00 0.00 0.00 0.00 00:07:55.342 [2024-11-20T11:17:38.458Z] =================================================================================================================== 00:07:55.342 [2024-11-20T11:17:38.458Z] Total : 22885.00 89.39 0.00 0.00 0.00 0.00 0.00 00:07:55.342 00:07:56.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.277 Nvme0n1 : 6.00 22903.67 89.47 0.00 0.00 0.00 0.00 0.00 00:07:56.277 [2024-11-20T11:17:39.393Z] =================================================================================================================== 00:07:56.277 
[2024-11-20T11:17:39.393Z] Total : 22903.67 89.47 0.00 0.00 0.00 0.00 0.00 00:07:56.277 00:07:57.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.212 Nvme0n1 : 7.00 22929.00 89.57 0.00 0.00 0.00 0.00 0.00 00:07:57.212 [2024-11-20T11:17:40.328Z] =================================================================================================================== 00:07:57.212 [2024-11-20T11:17:40.328Z] Total : 22929.00 89.57 0.00 0.00 0.00 0.00 0.00 00:07:57.212 00:07:58.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.146 Nvme0n1 : 8.00 22938.75 89.60 0.00 0.00 0.00 0.00 0.00 00:07:58.146 [2024-11-20T11:17:41.262Z] =================================================================================================================== 00:07:58.146 [2024-11-20T11:17:41.262Z] Total : 22938.75 89.60 0.00 0.00 0.00 0.00 0.00 00:07:58.146 00:07:59.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.080 Nvme0n1 : 9.00 22957.00 89.68 0.00 0.00 0.00 0.00 0.00 00:07:59.080 [2024-11-20T11:17:42.196Z] =================================================================================================================== 00:07:59.080 [2024-11-20T11:17:42.196Z] Total : 22957.00 89.68 0.00 0.00 0.00 0.00 0.00 00:07:59.080 00:08:00.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.016 Nvme0n1 : 10.00 22980.30 89.77 0.00 0.00 0.00 0.00 0.00 00:08:00.016 [2024-11-20T11:17:43.132Z] =================================================================================================================== 00:08:00.016 [2024-11-20T11:17:43.132Z] Total : 22980.30 89.77 0.00 0.00 0.00 0.00 0.00 00:08:00.016 00:08:00.016 00:08:00.016 Latency(us) 00:08:00.016 [2024-11-20T11:17:43.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:00.016 Nvme0n1 : 10.01 22980.87 89.77 0.00 0.00 5566.16 2749.66 10884.67 00:08:00.016 [2024-11-20T11:17:43.132Z] =================================================================================================================== 00:08:00.016 [2024-11-20T11:17:43.132Z] Total : 22980.87 89.77 0.00 0.00 5566.16 2749.66 10884.67 00:08:00.016 { 00:08:00.016 "results": [ 00:08:00.016 { 00:08:00.016 "job": "Nvme0n1", 00:08:00.016 "core_mask": "0x2", 00:08:00.016 "workload": "randwrite", 00:08:00.016 "status": "finished", 00:08:00.016 "queue_depth": 128, 00:08:00.016 "io_size": 4096, 00:08:00.016 "runtime": 10.00532, 00:08:00.016 "iops": 22980.874174938934, 00:08:00.016 "mibps": 89.76903974585521, 00:08:00.016 "io_failed": 0, 00:08:00.016 "io_timeout": 0, 00:08:00.016 "avg_latency_us": 5566.164827141904, 00:08:00.016 "min_latency_us": 2749.662608695652, 00:08:00.016 "max_latency_us": 10884.674782608696 00:08:00.016 } 00:08:00.016 ], 00:08:00.016 "core_count": 1 00:08:00.016 } 00:08:00.016 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 296467 00:08:00.016 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 296467 ']' 00:08:00.016 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 296467 00:08:00.016 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:00.016 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.016 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 296467 00:08:00.275 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:00.275 12:17:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:00.275 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 296467' 00:08:00.275 killing process with pid 296467 00:08:00.275 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 296467 00:08:00.275 Received shutdown signal, test time was about 10.000000 seconds 00:08:00.275 00:08:00.275 Latency(us) 00:08:00.275 [2024-11-20T11:17:43.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.275 [2024-11-20T11:17:43.391Z] =================================================================================================================== 00:08:00.275 [2024-11-20T11:17:43.391Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:00.275 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 296467 00:08:00.275 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.533 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:00.790 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f39ed203-71ae-40da-b45a-620762480fb8 00:08:00.790 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:00.790 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:08:00.790 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:00.790 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 293329 00:08:00.790 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 293329 00:08:01.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 293329 Killed "${NVMF_APP[@]}" "$@" 00:08:01.048 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:01.048 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:01.048 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:01.048 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:01.048 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:01.048 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=298517 00:08:01.048 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 298517 00:08:01.048 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:01.048 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 298517 ']' 00:08:01.048 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.048 12:17:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.048 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.048 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.048 12:17:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:01.048 [2024-11-20 12:17:43.998052] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:08:01.048 [2024-11-20 12:17:43.998101] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.048 [2024-11-20 12:17:44.077648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.048 [2024-11-20 12:17:44.118301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.048 [2024-11-20 12:17:44.118338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.048 [2024-11-20 12:17:44.118345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.048 [2024-11-20 12:17:44.118351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.048 [2024-11-20 12:17:44.118356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:01.048 [2024-11-20 12:17:44.118893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.307 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.307 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:01.307 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:01.307 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:01.307 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:01.307 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.307 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:01.307 [2024-11-20 12:17:44.413068] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:01.307 [2024-11-20 12:17:44.413147] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:01.307 [2024-11-20 12:17:44.413172] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:01.567 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:01.567 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 06c33b2f-60b4-41f8-ab0f-d15cf4b40c17 00:08:01.567 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=06c33b2f-60b4-41f8-ab0f-d15cf4b40c17 
00:08:01.567 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.567 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:01.567 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.567 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.567 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:01.567 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 06c33b2f-60b4-41f8-ab0f-d15cf4b40c17 -t 2000 00:08:01.826 [ 00:08:01.826 { 00:08:01.826 "name": "06c33b2f-60b4-41f8-ab0f-d15cf4b40c17", 00:08:01.826 "aliases": [ 00:08:01.826 "lvs/lvol" 00:08:01.826 ], 00:08:01.826 "product_name": "Logical Volume", 00:08:01.826 "block_size": 4096, 00:08:01.826 "num_blocks": 38912, 00:08:01.826 "uuid": "06c33b2f-60b4-41f8-ab0f-d15cf4b40c17", 00:08:01.826 "assigned_rate_limits": { 00:08:01.826 "rw_ios_per_sec": 0, 00:08:01.826 "rw_mbytes_per_sec": 0, 00:08:01.826 "r_mbytes_per_sec": 0, 00:08:01.826 "w_mbytes_per_sec": 0 00:08:01.826 }, 00:08:01.826 "claimed": false, 00:08:01.826 "zoned": false, 00:08:01.826 "supported_io_types": { 00:08:01.826 "read": true, 00:08:01.826 "write": true, 00:08:01.827 "unmap": true, 00:08:01.827 "flush": false, 00:08:01.827 "reset": true, 00:08:01.827 "nvme_admin": false, 00:08:01.827 "nvme_io": false, 00:08:01.827 "nvme_io_md": false, 00:08:01.827 "write_zeroes": true, 00:08:01.827 "zcopy": false, 00:08:01.827 "get_zone_info": false, 00:08:01.827 "zone_management": false, 00:08:01.827 "zone_append": 
false, 00:08:01.827 "compare": false, 00:08:01.827 "compare_and_write": false, 00:08:01.827 "abort": false, 00:08:01.827 "seek_hole": true, 00:08:01.827 "seek_data": true, 00:08:01.827 "copy": false, 00:08:01.827 "nvme_iov_md": false 00:08:01.827 }, 00:08:01.827 "driver_specific": { 00:08:01.827 "lvol": { 00:08:01.827 "lvol_store_uuid": "f39ed203-71ae-40da-b45a-620762480fb8", 00:08:01.827 "base_bdev": "aio_bdev", 00:08:01.827 "thin_provision": false, 00:08:01.827 "num_allocated_clusters": 38, 00:08:01.827 "snapshot": false, 00:08:01.827 "clone": false, 00:08:01.827 "esnap_clone": false 00:08:01.827 } 00:08:01.827 } 00:08:01.827 } 00:08:01.827 ] 00:08:01.827 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:01.827 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f39ed203-71ae-40da-b45a-620762480fb8 00:08:01.827 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:02.085 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:02.085 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f39ed203-71ae-40da-b45a-620762480fb8 00:08:02.085 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:02.085 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:02.085 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:02.343 [2024-11-20 12:17:45.369974] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:02.343 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f39ed203-71ae-40da-b45a-620762480fb8 00:08:02.343 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:02.343 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f39ed203-71ae-40da-b45a-620762480fb8 00:08:02.344 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.344 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.344 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.344 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.344 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.344 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.344 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.344 12:17:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:02.344 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f39ed203-71ae-40da-b45a-620762480fb8 00:08:02.602 request: 00:08:02.603 { 00:08:02.603 "uuid": "f39ed203-71ae-40da-b45a-620762480fb8", 00:08:02.603 "method": "bdev_lvol_get_lvstores", 00:08:02.603 "req_id": 1 00:08:02.603 } 00:08:02.603 Got JSON-RPC error response 00:08:02.603 response: 00:08:02.603 { 00:08:02.603 "code": -19, 00:08:02.603 "message": "No such device" 00:08:02.603 } 00:08:02.603 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:02.603 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:02.603 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:02.603 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:02.603 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:02.862 aio_bdev 00:08:02.862 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 06c33b2f-60b4-41f8-ab0f-d15cf4b40c17 00:08:02.862 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=06c33b2f-60b4-41f8-ab0f-d15cf4b40c17 00:08:02.862 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:02.862 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:02.862 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:02.862 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:02.862 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:02.862 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 06c33b2f-60b4-41f8-ab0f-d15cf4b40c17 -t 2000 00:08:03.121 [ 00:08:03.121 { 00:08:03.121 "name": "06c33b2f-60b4-41f8-ab0f-d15cf4b40c17", 00:08:03.121 "aliases": [ 00:08:03.121 "lvs/lvol" 00:08:03.121 ], 00:08:03.121 "product_name": "Logical Volume", 00:08:03.121 "block_size": 4096, 00:08:03.121 "num_blocks": 38912, 00:08:03.121 "uuid": "06c33b2f-60b4-41f8-ab0f-d15cf4b40c17", 00:08:03.121 "assigned_rate_limits": { 00:08:03.121 "rw_ios_per_sec": 0, 00:08:03.121 "rw_mbytes_per_sec": 0, 00:08:03.121 "r_mbytes_per_sec": 0, 00:08:03.121 "w_mbytes_per_sec": 0 00:08:03.121 }, 00:08:03.121 "claimed": false, 00:08:03.121 "zoned": false, 00:08:03.121 "supported_io_types": { 00:08:03.121 "read": true, 00:08:03.121 "write": true, 00:08:03.121 "unmap": true, 00:08:03.121 "flush": false, 00:08:03.121 "reset": true, 00:08:03.121 "nvme_admin": false, 00:08:03.121 "nvme_io": false, 00:08:03.121 "nvme_io_md": false, 00:08:03.121 "write_zeroes": true, 00:08:03.121 "zcopy": false, 00:08:03.121 "get_zone_info": false, 00:08:03.121 "zone_management": false, 00:08:03.121 "zone_append": false, 00:08:03.121 "compare": false, 00:08:03.121 "compare_and_write": false, 
00:08:03.121 "abort": false, 00:08:03.121 "seek_hole": true, 00:08:03.121 "seek_data": true, 00:08:03.121 "copy": false, 00:08:03.121 "nvme_iov_md": false 00:08:03.121 }, 00:08:03.121 "driver_specific": { 00:08:03.121 "lvol": { 00:08:03.121 "lvol_store_uuid": "f39ed203-71ae-40da-b45a-620762480fb8", 00:08:03.121 "base_bdev": "aio_bdev", 00:08:03.121 "thin_provision": false, 00:08:03.121 "num_allocated_clusters": 38, 00:08:03.121 "snapshot": false, 00:08:03.121 "clone": false, 00:08:03.121 "esnap_clone": false 00:08:03.121 } 00:08:03.121 } 00:08:03.121 } 00:08:03.121 ] 00:08:03.121 12:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:03.121 12:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:03.121 12:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f39ed203-71ae-40da-b45a-620762480fb8 00:08:03.383 12:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:03.383 12:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f39ed203-71ae-40da-b45a-620762480fb8 00:08:03.383 12:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:03.642 12:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:03.643 12:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 06c33b2f-60b4-41f8-ab0f-d15cf4b40c17 00:08:03.643 12:17:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f39ed203-71ae-40da-b45a-620762480fb8 00:08:03.901 12:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:04.161 00:08:04.161 real 0m16.993s 00:08:04.161 user 0m43.855s 00:08:04.161 sys 0m3.881s 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:04.161 ************************************ 00:08:04.161 END TEST lvs_grow_dirty 00:08:04.161 ************************************ 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:04.161 nvmf_trace.0 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:04.161 rmmod nvme_tcp 00:08:04.161 rmmod nvme_fabrics 00:08:04.161 rmmod nvme_keyring 00:08:04.161 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 298517 ']' 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 298517 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 298517 ']' 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 298517 
00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 298517 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 298517' 00:08:04.421 killing process with pid 298517 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 298517 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 298517 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.421 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.958 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:06.958 00:08:06.958 real 0m41.878s 00:08:06.958 user 1m4.620s 00:08:06.958 sys 0m10.330s 00:08:06.958 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.958 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:06.958 ************************************ 00:08:06.958 END TEST nvmf_lvs_grow 00:08:06.958 ************************************ 00:08:06.958 12:17:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:06.958 12:17:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:06.958 12:17:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.958 12:17:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:06.958 ************************************ 00:08:06.958 START TEST nvmf_bdev_io_wait 00:08:06.958 ************************************ 00:08:06.958 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:06.958 * Looking for test storage... 
00:08:06.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.958 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:06.958 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:06.959 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.959 --rc genhtml_branch_coverage=1 00:08:06.959 --rc genhtml_function_coverage=1 00:08:06.959 --rc genhtml_legend=1 00:08:06.959 --rc geninfo_all_blocks=1 00:08:06.959 --rc geninfo_unexecuted_blocks=1 00:08:06.959 00:08:06.959 ' 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:06.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.959 --rc genhtml_branch_coverage=1 00:08:06.959 --rc genhtml_function_coverage=1 00:08:06.959 --rc genhtml_legend=1 00:08:06.959 --rc geninfo_all_blocks=1 00:08:06.959 --rc geninfo_unexecuted_blocks=1 00:08:06.959 00:08:06.959 ' 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:06.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.959 --rc genhtml_branch_coverage=1 00:08:06.959 --rc genhtml_function_coverage=1 00:08:06.959 --rc genhtml_legend=1 00:08:06.959 --rc geninfo_all_blocks=1 00:08:06.959 --rc geninfo_unexecuted_blocks=1 00:08:06.959 00:08:06.959 ' 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:06.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.959 --rc genhtml_branch_coverage=1 00:08:06.959 --rc genhtml_function_coverage=1 00:08:06.959 --rc genhtml_legend=1 00:08:06.959 --rc geninfo_all_blocks=1 00:08:06.959 --rc geninfo_unexecuted_blocks=1 00:08:06.959 00:08:06.959 ' 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.959 12:17:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:06.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:06.959 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:06.960 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:06.960 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:06.960 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.960 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:06.960 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:06.960 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:06.960 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.960 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.960 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
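The trace above records a real failure at nvmf/common.sh line 33: `'[' '' -eq 1 ']'` aborts with "integer expression expected", because the tested variable expands to an empty string and `-eq` requires an integer operand. A hedged sketch of the usual defensive pattern (the function and flag names here are illustrative, not SPDK's):

```shell
#!/usr/bin/env bash
# The log's "[: : integer expression expected" comes from an empty variable
# reaching an arithmetic test: [ "$VAR" -eq 1 ] with VAR="" is an operand
# error for the test builtin. Defaulting the expansion avoids it.
check_flag() {
    local flag=$1
    # "${flag:-0}" substitutes 0 when the variable is empty or unset,
    # so the numeric comparison always sees an integer.
    if [ "${flag:-0}" -eq 1 ]; then
        echo "enabled"
    else
        echo "disabled"
    fi
}

check_flag ""   # prints "disabled" instead of erroring out
check_flag 1    # prints "enabled"
```

In this run the error is harmless (the `[` failure just takes the else-branch), which is why the test proceeds, but the same pattern under `set -e` would abort the script.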
00:08:06.960 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:06.960 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:06.960 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:06.960 12:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:13.532 12:17:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:13.532 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:13.532 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.532 12:17:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:13.532 Found net devices under 0000:86:00.0: cvl_0_0 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.532 
12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:13.532 Found net devices under 0000:86:00.1: cvl_0_1 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:13.532 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:13.533 12:17:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:13.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:08:13.533 00:08:13.533 --- 10.0.0.2 ping statistics --- 00:08:13.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.533 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:13.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:13.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:08:13.533 00:08:13.533 --- 10.0.0.1 ping statistics --- 00:08:13.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.533 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=302720 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
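The nvmftestinit steps traced above build a two-sided TCP test topology on a single host: one port of the NIC pair (cvl_0_0) is moved into a fresh network namespace (cvl_0_0_ns_spdk) to act as the target, the two sides get 10.0.0.2 and 10.0.0.1 on the same /24, an iptables rule opens NVMe/TCP port 4420, and a ping in each direction confirms reachability before the target starts. The real commands need root and physical interfaces, so the sketch below is a dry-run that only prints the sequence it would execute; interface and namespace names are taken from the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology set up by nvmf/common.sh's nvmf_tcp_init.
# run() echoes instead of executing, so this is safe without root or real NICs.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk            # target-side namespace (name from the log)
TGT_IF=cvl_0_0 INI_IF=cvl_0_1
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"           # target port leaves the host ns
run ip addr add "$INI_IP/24" dev "$INI_IF"      # initiator stays in the host ns
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"                         # host -> namespace
run ip netns exec "$NS" ping -c 1 "$INI_IP"     # namespace -> host
```

This split is why the target later launches as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt`: the kernel routes initiator traffic through a real veth-less path between the two physical ports rather than the loopback device.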
nvmf/common.sh@510 -- # waitforlisten 302720 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 302720 ']' 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.533 12:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.533 [2024-11-20 12:17:55.932734] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:08:13.533 [2024-11-20 12:17:55.932782] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.533 [2024-11-20 12:17:56.014401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.533 [2024-11-20 12:17:56.056801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.533 [2024-11-20 12:17:56.056840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:13.533 [2024-11-20 12:17:56.056848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.533 [2024-11-20 12:17:56.056854] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.533 [2024-11-20 12:17:56.056859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.533 [2024-11-20 12:17:56.058415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.533 [2024-11-20 12:17:56.058529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.533 [2024-11-20 12:17:56.058641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.533 [2024-11-20 12:17:56.058642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.533 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.533 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:13.533 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:13.533 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:13.533 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.533 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.533 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:13.533 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.533 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.533 12:17:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.533 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:13.533 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.533 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.533 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.533 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:13.533 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.533 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.533 [2024-11-20 12:17:56.207246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.533 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.534 Malloc0 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.534 
12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.534 [2024-11-20 12:17:56.262921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=302824 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=302826 
00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:13.534 { 00:08:13.534 "params": { 00:08:13.534 "name": "Nvme$subsystem", 00:08:13.534 "trtype": "$TEST_TRANSPORT", 00:08:13.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:13.534 "adrfam": "ipv4", 00:08:13.534 "trsvcid": "$NVMF_PORT", 00:08:13.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:13.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:13.534 "hdgst": ${hdgst:-false}, 00:08:13.534 "ddgst": ${ddgst:-false} 00:08:13.534 }, 00:08:13.534 "method": "bdev_nvme_attach_controller" 00:08:13.534 } 00:08:13.534 EOF 00:08:13.534 )") 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=302828 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:13.534 { 00:08:13.534 "params": { 00:08:13.534 
"name": "Nvme$subsystem", 00:08:13.534 "trtype": "$TEST_TRANSPORT", 00:08:13.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:13.534 "adrfam": "ipv4", 00:08:13.534 "trsvcid": "$NVMF_PORT", 00:08:13.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:13.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:13.534 "hdgst": ${hdgst:-false}, 00:08:13.534 "ddgst": ${ddgst:-false} 00:08:13.534 }, 00:08:13.534 "method": "bdev_nvme_attach_controller" 00:08:13.534 } 00:08:13.534 EOF 00:08:13.534 )") 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=302831 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:08:13.534 { 00:08:13.534 "params": { 00:08:13.534 "name": "Nvme$subsystem", 00:08:13.534 "trtype": "$TEST_TRANSPORT", 00:08:13.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:13.534 "adrfam": "ipv4", 00:08:13.534 "trsvcid": "$NVMF_PORT", 00:08:13.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:13.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:13.534 "hdgst": ${hdgst:-false}, 00:08:13.534 "ddgst": ${ddgst:-false} 00:08:13.534 }, 00:08:13.534 "method": "bdev_nvme_attach_controller" 00:08:13.534 } 00:08:13.534 EOF 00:08:13.534 )") 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:13.534 { 00:08:13.534 "params": { 00:08:13.534 "name": "Nvme$subsystem", 00:08:13.534 "trtype": "$TEST_TRANSPORT", 00:08:13.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:13.534 "adrfam": "ipv4", 00:08:13.534 "trsvcid": "$NVMF_PORT", 00:08:13.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:13.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:13.534 "hdgst": ${hdgst:-false}, 00:08:13.534 "ddgst": ${ddgst:-false} 00:08:13.534 }, 00:08:13.534 "method": "bdev_nvme_attach_controller" 00:08:13.534 } 00:08:13.534 EOF 00:08:13.534 )") 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 302824 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:13.534 
12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:13.534 "params": { 00:08:13.534 "name": "Nvme1", 00:08:13.534 "trtype": "tcp", 00:08:13.534 "traddr": "10.0.0.2", 00:08:13.534 "adrfam": "ipv4", 00:08:13.534 "trsvcid": "4420", 00:08:13.534 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:13.534 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:13.534 "hdgst": false, 00:08:13.534 "ddgst": false 00:08:13.534 }, 00:08:13.534 "method": "bdev_nvme_attach_controller" 00:08:13.534 }' 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:13.534 "params": { 00:08:13.534 "name": "Nvme1", 00:08:13.534 "trtype": "tcp", 00:08:13.534 "traddr": "10.0.0.2", 00:08:13.534 "adrfam": "ipv4", 00:08:13.534 "trsvcid": "4420", 00:08:13.534 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:13.534 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:13.534 "hdgst": false, 00:08:13.534 "ddgst": false 00:08:13.534 }, 00:08:13.534 "method": "bdev_nvme_attach_controller" 00:08:13.534 }' 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:13.534 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:13.534 "params": { 00:08:13.534 "name": "Nvme1", 00:08:13.534 "trtype": "tcp", 00:08:13.534 "traddr": "10.0.0.2", 00:08:13.534 "adrfam": "ipv4", 00:08:13.534 "trsvcid": "4420", 00:08:13.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:13.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:13.535 "hdgst": false, 00:08:13.535 "ddgst": false 00:08:13.535 }, 00:08:13.535 "method": "bdev_nvme_attach_controller" 00:08:13.535 }' 00:08:13.535 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:13.535 12:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:13.535 "params": { 00:08:13.535 "name": "Nvme1", 00:08:13.535 "trtype": "tcp", 00:08:13.535 "traddr": "10.0.0.2", 00:08:13.535 "adrfam": "ipv4", 00:08:13.535 "trsvcid": "4420", 00:08:13.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:13.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:13.535 "hdgst": false, 00:08:13.535 "ddgst": false 00:08:13.535 }, 00:08:13.535 "method": "bdev_nvme_attach_controller" 00:08:13.535 }' 00:08:13.535 [2024-11-20 12:17:56.315656] Starting SPDK v25.01-pre git sha1 
0383e688b / DPDK 24.03.0 initialization... 00:08:13.535 [2024-11-20 12:17:56.315705] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:13.535 [2024-11-20 12:17:56.316433] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:08:13.535 [2024-11-20 12:17:56.316476] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:13.535 [2024-11-20 12:17:56.317436] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:08:13.535 [2024-11-20 12:17:56.317436] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:08:13.535 [2024-11-20 12:17:56.317483] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:13.535 [2024-11-20 12:17:56.317483] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:13.535 [2024-11-20 12:17:56.509227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.535 [2024-11-20 12:17:56.552267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:13.535 [2024-11-20 12:17:56.601218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.535 [2024-11-20 12:17:56.644459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:13.794 [2024-11-20
12:17:56.702652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.794 [2024-11-20 12:17:56.756940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:13.794 [2024-11-20 12:17:56.759586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.794 [2024-11-20 12:17:56.802466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:13.794 Running I/O for 1 seconds... 00:08:13.794 Running I/O for 1 seconds... 00:08:14.051 Running I/O for 1 seconds... 00:08:14.051 Running I/O for 1 seconds... 00:08:14.985 13009.00 IOPS, 50.82 MiB/s [2024-11-20T11:17:58.101Z] 9463.00 IOPS, 36.96 MiB/s 00:08:14.985 Latency(us) 00:08:14.985 [2024-11-20T11:17:58.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.985 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:14.985 Nvme1n1 : 1.01 13069.99 51.05 0.00 0.00 9764.07 4587.52 20173.69 00:08:14.985 [2024-11-20T11:17:58.101Z] =================================================================================================================== 00:08:14.985 [2024-11-20T11:17:58.101Z] Total : 13069.99 51.05 0.00 0.00 9764.07 4587.52 20173.69 00:08:14.985 00:08:14.985 Latency(us) 00:08:14.985 [2024-11-20T11:17:58.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.985 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:14.985 Nvme1n1 : 1.01 9512.42 37.16 0.00 0.00 13397.83 7351.43 22453.20 00:08:14.985 [2024-11-20T11:17:58.101Z] =================================================================================================================== 00:08:14.985 [2024-11-20T11:17:58.101Z] Total : 9512.42 37.16 0.00 0.00 13397.83 7351.43 22453.20 00:08:14.985 10374.00 IOPS, 40.52 MiB/s 00:08:14.985 Latency(us) 00:08:14.985 [2024-11-20T11:17:58.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.985 Job: Nvme1n1 (Core Mask 0x10, 
workload: write, depth: 128, IO size: 4096) 00:08:14.985 Nvme1n1 : 1.01 10460.82 40.86 0.00 0.00 12205.10 3604.48 25530.55 00:08:14.985 [2024-11-20T11:17:58.101Z] =================================================================================================================== 00:08:14.985 [2024-11-20T11:17:58.101Z] Total : 10460.82 40.86 0.00 0.00 12205.10 3604.48 25530.55 00:08:14.985 238696.00 IOPS, 932.41 MiB/s 00:08:14.985 Latency(us) 00:08:14.985 [2024-11-20T11:17:58.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.985 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:14.985 Nvme1n1 : 1.00 238294.85 930.84 0.00 0.00 534.76 249.32 1681.14 00:08:14.985 [2024-11-20T11:17:58.101Z] =================================================================================================================== 00:08:14.985 [2024-11-20T11:17:58.101Z] Total : 238294.85 930.84 0.00 0.00 534.76 249.32 1681.14 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 302826 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 302828 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 302831 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:15.243 12:17:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:15.243 rmmod nvme_tcp 00:08:15.243 rmmod nvme_fabrics 00:08:15.243 rmmod nvme_keyring 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 302720 ']' 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 302720 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 302720 ']' 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 302720 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 302720 00:08:15.243 
12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 302720' 00:08:15.243 killing process with pid 302720 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 302720 00:08:15.243 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 302720 00:08:15.503 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:15.503 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:15.503 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:15.503 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:15.503 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:15.503 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:15.503 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:15.503 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:15.503 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:15.503 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.503 12:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.503 12:17:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.039 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:18.039 00:08:18.039 real 0m10.923s 00:08:18.040 user 0m16.645s 00:08:18.040 sys 0m6.200s 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.040 ************************************ 00:08:18.040 END TEST nvmf_bdev_io_wait 00:08:18.040 ************************************ 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:18.040 ************************************ 00:08:18.040 START TEST nvmf_queue_depth 00:08:18.040 ************************************ 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:18.040 * Looking for test storage... 
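Editor's note on the completed nvmf_bdev_io_wait run above: each of the four bdevperf jobs (write/read/flush/unmap on core masks 0x10, 0x20, 0x40, 0x80) was fed a generated attach-controller config over /dev/fd/63. The following is a rough Python re-creation of that JSON, purely for illustration: the per-entry fields are copied from the printf output in the trace, the function name mirrors the traced gen_nvmf_target_json helper, but the outer "subsystems"/"bdev" wrapper is an assumption, not taken from this log.

```python
import json

def gen_nvmf_target_json(subsystems=(1,), trtype="tcp",
                         traddr="10.0.0.2", trsvcid="4420"):
    # Illustrative re-creation (NOT the real shell helper): one
    # bdev_nvme_attach_controller entry per subsystem, matching the
    # fields printed by nvmf/common.sh in the trace above.
    config = []
    for n in subsystems:
        config.append({
            "params": {
                "name": f"Nvme{n}",
                "trtype": trtype,
                "traddr": traddr,
                "adrfam": "ipv4",
                "trsvcid": trsvcid,
                "subnqn": f"nqn.2016-06.io.spdk:cnode{n}",
                "hostnqn": f"nqn.2016-06.io.spdk:host{n}",
                "hdgst": False,
                "ddgst": False,
            },
            "method": "bdev_nvme_attach_controller",
        })
    # Assumed wrapper: bdevperf --json expects entries under the
    # bdev subsystem's "config" key.
    return json.dumps({"subsystems": [{"subsystem": "bdev",
                                       "config": config}]})

print(gen_nvmf_target_json())
```

As a sanity check on the latency tables above: at 4 KiB per IO, 13069.99 IOPS is 13069.99 × 4096 / 2^20 ≈ 51.05 MiB/s, which matches the unmap row's MiB/s column.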
00:08:18.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:18.040 
12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:18.040 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:18.040 --rc genhtml_branch_coverage=1 00:08:18.040 --rc genhtml_function_coverage=1 00:08:18.040 --rc genhtml_legend=1 00:08:18.040 --rc geninfo_all_blocks=1 00:08:18.040 --rc geninfo_unexecuted_blocks=1 00:08:18.040 00:08:18.040 ' 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:18.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.040 --rc genhtml_branch_coverage=1 00:08:18.040 --rc genhtml_function_coverage=1 00:08:18.040 --rc genhtml_legend=1 00:08:18.040 --rc geninfo_all_blocks=1 00:08:18.040 --rc geninfo_unexecuted_blocks=1 00:08:18.040 00:08:18.040 ' 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:18.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.040 --rc genhtml_branch_coverage=1 00:08:18.040 --rc genhtml_function_coverage=1 00:08:18.040 --rc genhtml_legend=1 00:08:18.040 --rc geninfo_all_blocks=1 00:08:18.040 --rc geninfo_unexecuted_blocks=1 00:08:18.040 00:08:18.040 ' 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:18.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.040 --rc genhtml_branch_coverage=1 00:08:18.040 --rc genhtml_function_coverage=1 00:08:18.040 --rc genhtml_legend=1 00:08:18.040 --rc geninfo_all_blocks=1 00:08:18.040 --rc geninfo_unexecuted_blocks=1 00:08:18.040 00:08:18.040 ' 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:18.040 12:18:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:18.040 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.041 12:18:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:18.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.041 12:18:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:18.041 12:18:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:24.613 12:18:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:24.613 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:24.613 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:24.613 Found net devices under 0000:86:00.0: cvl_0_0 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:24.613 Found net devices under 0000:86:00.1: cvl_0_1 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:24.613 
12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:24.613 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:24.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:24.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:08:24.614 00:08:24.614 --- 10.0.0.2 ping statistics --- 00:08:24.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.614 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:24.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:24.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:08:24.614 00:08:24.614 --- 10.0.0.1 ping statistics --- 00:08:24.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.614 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=306749 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 306749 
00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 306749 ']' 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.614 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.614 [2024-11-20 12:18:06.935295] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:08:24.614 [2024-11-20 12:18:06.935346] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.614 [2024-11-20 12:18:07.019957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.614 [2024-11-20 12:18:07.061400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.614 [2024-11-20 12:18:07.061440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:24.614 [2024-11-20 12:18:07.061447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.614 [2024-11-20 12:18:07.061453] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.614 [2024-11-20 12:18:07.061459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.614 [2024-11-20 12:18:07.062027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.614 [2024-11-20 12:18:07.198457] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.614 Malloc0 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.614 [2024-11-20 12:18:07.249015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.614 12:18:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=306980 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 306980 /var/tmp/bdevperf.sock 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 306980 ']' 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:24.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.614 [2024-11-20 12:18:07.300686] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:08:24.614 [2024-11-20 12:18:07.300727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid306980 ] 00:08:24.614 [2024-11-20 12:18:07.376118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.614 [2024-11-20 12:18:07.418777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.614 NVMe0n1 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.614 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:24.873 Running I/O for 10 seconds... 
00:08:26.744 11328.00 IOPS, 44.25 MiB/s [2024-11-20T11:18:11.238Z] 11779.00 IOPS, 46.01 MiB/s [2024-11-20T11:18:12.173Z] 11937.00 IOPS, 46.63 MiB/s [2024-11-20T11:18:13.109Z] 12021.50 IOPS, 46.96 MiB/s [2024-11-20T11:18:14.044Z] 12050.20 IOPS, 47.07 MiB/s [2024-11-20T11:18:14.978Z] 12040.83 IOPS, 47.03 MiB/s [2024-11-20T11:18:15.913Z] 12110.29 IOPS, 47.31 MiB/s [2024-11-20T11:18:16.895Z] 12143.88 IOPS, 47.44 MiB/s [2024-11-20T11:18:17.931Z] 12161.67 IOPS, 47.51 MiB/s [2024-11-20T11:18:17.931Z] 12171.90 IOPS, 47.55 MiB/s 00:08:34.815 Latency(us) 00:08:34.815 [2024-11-20T11:18:17.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.815 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:34.815 Verification LBA range: start 0x0 length 0x4000 00:08:34.815 NVMe0n1 : 10.06 12198.46 47.65 0.00 0.00 83681.66 19261.89 54024.46 00:08:34.815 [2024-11-20T11:18:17.931Z] =================================================================================================================== 00:08:34.815 [2024-11-20T11:18:17.931Z] Total : 12198.46 47.65 0.00 0.00 83681.66 19261.89 54024.46 00:08:34.815 { 00:08:34.815 "results": [ 00:08:34.815 { 00:08:34.815 "job": "NVMe0n1", 00:08:34.815 "core_mask": "0x1", 00:08:34.815 "workload": "verify", 00:08:34.815 "status": "finished", 00:08:34.815 "verify_range": { 00:08:34.815 "start": 0, 00:08:34.815 "length": 16384 00:08:34.815 }, 00:08:34.815 "queue_depth": 1024, 00:08:34.815 "io_size": 4096, 00:08:34.815 "runtime": 10.062172, 00:08:34.815 "iops": 12198.459736128541, 00:08:34.815 "mibps": 47.650233344252115, 00:08:34.815 "io_failed": 0, 00:08:34.815 "io_timeout": 0, 00:08:34.815 "avg_latency_us": 83681.66287614737, 00:08:34.815 "min_latency_us": 19261.885217391304, 00:08:34.815 "max_latency_us": 54024.459130434785 00:08:34.815 } 00:08:34.815 ], 00:08:34.815 "core_count": 1 00:08:34.815 } 00:08:34.815 12:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 306980 00:08:34.815 12:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 306980 ']' 00:08:34.815 12:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 306980 00:08:34.815 12:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:34.815 12:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.815 12:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 306980 00:08:35.074 12:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.074 12:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.074 12:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 306980' 00:08:35.074 killing process with pid 306980 00:08:35.074 12:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 306980 00:08:35.074 Received shutdown signal, test time was about 10.000000 seconds 00:08:35.074 00:08:35.074 Latency(us) 00:08:35.074 [2024-11-20T11:18:18.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.074 [2024-11-20T11:18:18.190Z] =================================================================================================================== 00:08:35.074 [2024-11-20T11:18:18.190Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:35.074 12:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 306980 00:08:35.074 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:35.074 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:35.074 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:35.074 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:35.074 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.074 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:35.074 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:35.074 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.074 rmmod nvme_tcp 00:08:35.074 rmmod nvme_fabrics 00:08:35.074 rmmod nvme_keyring 00:08:35.074 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:35.074 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:35.074 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:35.074 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 306749 ']' 00:08:35.074 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 306749 00:08:35.074 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 306749 ']' 00:08:35.074 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 306749 00:08:35.074 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:35.333 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.333 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 306749 00:08:35.333 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:35.333 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:35.333 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 306749' 00:08:35.333 killing process with pid 306749 00:08:35.333 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 306749 00:08:35.333 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 306749 00:08:35.333 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:35.333 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:35.333 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:35.333 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:35.333 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:35.333 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:35.333 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:35.333 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.333 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:35.333 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.333 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.333 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.876 12:18:20 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:37.876 00:08:37.876 real 0m19.848s 00:08:37.876 user 0m23.230s 00:08:37.876 sys 0m6.113s 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.876 ************************************ 00:08:37.876 END TEST nvmf_queue_depth 00:08:37.876 ************************************ 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:37.876 ************************************ 00:08:37.876 START TEST nvmf_target_multipath 00:08:37.876 ************************************ 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:37.876 * Looking for test storage... 
00:08:37.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:37.876 12:18:20 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
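The `cmp_versions` trace above (comparing lcov's `1.15` against `2`) splits both version strings on `.`, `-` and `:` and compares them component by component, treating missing components as zero. A condensed sketch of that comparison (hypothetical function name; assumes plain numeric components without leading zeros):

```shell
# Sketch of the component-wise version comparison seen in scripts/common.sh:
# split on '.', '-' and ':', then compare numerically field by field.
version_lt() {
    local IFS='.-:'
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        (( x < y )) && return 0           # strictly less at this component
        (( x > y )) && return 1
    done
    return 1                              # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```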
00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:37.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.876 --rc genhtml_branch_coverage=1 00:08:37.876 --rc genhtml_function_coverage=1 00:08:37.876 --rc genhtml_legend=1 00:08:37.876 --rc geninfo_all_blocks=1 00:08:37.876 --rc geninfo_unexecuted_blocks=1 00:08:37.876 00:08:37.876 ' 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:37.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.876 --rc genhtml_branch_coverage=1 00:08:37.876 --rc genhtml_function_coverage=1 00:08:37.876 --rc genhtml_legend=1 00:08:37.876 --rc geninfo_all_blocks=1 00:08:37.876 --rc geninfo_unexecuted_blocks=1 00:08:37.876 00:08:37.876 ' 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:37.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.876 --rc genhtml_branch_coverage=1 00:08:37.876 --rc genhtml_function_coverage=1 00:08:37.876 --rc genhtml_legend=1 00:08:37.876 --rc geninfo_all_blocks=1 00:08:37.876 --rc geninfo_unexecuted_blocks=1 00:08:37.876 00:08:37.876 ' 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:37.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.876 --rc genhtml_branch_coverage=1 00:08:37.876 --rc genhtml_function_coverage=1 00:08:37.876 --rc genhtml_legend=1 00:08:37.876 --rc geninfo_all_blocks=1 00:08:37.876 --rc geninfo_unexecuted_blocks=1 00:08:37.876 00:08:37.876 ' 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.876 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
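The `nvme gen-hostnqn` call above yields a UUID-based host NQN of the form `nqn.2014-08.org.nvmexpress:uuid:<uuid>`, which the test exports as `NVME_HOSTNQN`. The same shape can be built without nvme-cli from the kernel's random UUID source (a Linux-only assumption; this is a sketch, not what the script itself does):

```shell
# Sketch: build a gen-hostnqn-style host NQN from the kernel's UUID source.
hostnqn="nqn.2014-08.org.nvmexpress:uuid:$(cat /proc/sys/kernel/random/uuid)"
echo "$hostnqn"
```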
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
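The PATH values echoed above show that `paths/export.sh` prepends the golangci/protoc/go directories on every source, so each directory appears many times. The duplicates are harmless but noisy; a small sketch (hypothetical helper, assumes entries contain no glob characters) that collapses them while keeping first-occurrence order:

```shell
# Sketch: collapse duplicate entries in a PATH-like string, keeping the
# first occurrence of each directory in order.
dedupe_path() {
    local out= seen= dir
    local IFS=':'
    for dir in $1; do
        case ":$seen:" in
            *":$dir:"*) continue ;;     # already emitted
        esac
        seen=${seen:+$seen:}$dir
        out=${out:+$out:}$dir
    done
    printf '%s\n' "$out"
}

dedupe_path "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/bin:/sbin"
# -> /opt/go/bin:/usr/bin:/sbin
```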
MALLOC_BDEV_SIZE=64 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:37.877 12:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:44.444 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:44.444 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:44.445 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:44.445 Found net devices under 0000:86:00.0: cvl_0_0 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:44.445 12:18:26 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:44.445 Found net devices under 0000:86:00.1: cvl_0_1 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:44.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:44.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:08:44.445 00:08:44.445 --- 10.0.0.2 ping statistics --- 00:08:44.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.445 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:44.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:44.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:08:44.445 00:08:44.445 --- 10.0.0.1 ping statistics --- 00:08:44.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.445 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:44.445 only one NIC for nvmf test 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:44.445 12:18:26 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:44.445 rmmod nvme_tcp 00:08:44.445 rmmod nvme_fabrics 00:08:44.445 rmmod nvme_keyring 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
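The `iptr` cleanup traced above pairs with the earlier `ipts` wrapper: every rule `ipts` installs carries an `-m comment --comment 'SPDK_NVMF:…'` marker, so teardown can round-trip `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore` and sweep only the test's rules. A sketch of the sweep on a simulated `iptables-save` dump (no root, no real iptables touched):

```shell
# Sketch: the tag-and-sweep cleanup used by ipts/iptr, demonstrated on a
# simulated iptables-save dump instead of the live ruleset.
ruleset='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:rule"
-A INPUT -p icmp -j ACCEPT'

# iptr equivalent: drop every rule carrying the SPDK_NVMF marker; the
# surviving lines are what iptables-restore would reload.
swept=$(printf '%s\n' "$ruleset" | grep -v SPDK_NVMF)
printf '%s\n' "$swept"
```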
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.445 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.825 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:45.825 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:45.825 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:45.825 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:45.825 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:45.825 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:45.825 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:45.825 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:45.825 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:45.825 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:46.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:46.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:46.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:46.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:46.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:46.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:46.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:46.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:46.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:46.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:46.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:46.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:46.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:46.085 00:08:46.085 real 0m8.399s 00:08:46.085 user 0m1.857s 00:08:46.085 sys 0m4.566s 00:08:46.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:46.085 ************************************ 00:08:46.085 END TEST nvmf_target_multipath 00:08:46.085 ************************************ 00:08:46.085 12:18:29 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:46.085 12:18:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:46.085 12:18:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.085 12:18:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:46.085 ************************************ 00:08:46.085 START TEST nvmf_zcopy 00:08:46.085 ************************************ 00:08:46.085 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:46.085 * Looking for test storage... 00:08:46.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.085 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:46.085 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:46.085 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:46.085 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:46.085 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.085 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.085 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.345 12:18:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:46.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.345 --rc genhtml_branch_coverage=1 00:08:46.345 --rc genhtml_function_coverage=1 00:08:46.345 --rc genhtml_legend=1 00:08:46.345 --rc geninfo_all_blocks=1 00:08:46.345 --rc geninfo_unexecuted_blocks=1 00:08:46.345 00:08:46.345 ' 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:46.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.345 --rc genhtml_branch_coverage=1 00:08:46.345 --rc genhtml_function_coverage=1 00:08:46.345 --rc genhtml_legend=1 00:08:46.345 --rc geninfo_all_blocks=1 00:08:46.345 --rc geninfo_unexecuted_blocks=1 00:08:46.345 00:08:46.345 ' 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:46.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.345 --rc genhtml_branch_coverage=1 00:08:46.345 --rc genhtml_function_coverage=1 00:08:46.345 --rc genhtml_legend=1 00:08:46.345 --rc geninfo_all_blocks=1 00:08:46.345 --rc geninfo_unexecuted_blocks=1 00:08:46.345 00:08:46.345 ' 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:46.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.345 --rc genhtml_branch_coverage=1 00:08:46.345 --rc 
genhtml_function_coverage=1 00:08:46.345 --rc genhtml_legend=1 00:08:46.345 --rc geninfo_all_blocks=1 00:08:46.345 --rc geninfo_unexecuted_blocks=1 00:08:46.345 00:08:46.345 ' 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.345 12:18:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.345 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:46.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:46.346 12:18:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:46.346 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.954 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.954 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:52.954 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:52.954 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:52.954 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:52.954 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:52.954 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:52.954 12:18:35 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:52.954 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:52.954 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:52.954 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:52.954 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:52.954 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:52.954 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:52.954 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:52.955 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:52.955 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:52.955 Found net devices under 0000:86:00.0: cvl_0_0 00:08:52.955 12:18:35 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.955 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:52.956 Found net devices under 0000:86:00.1: cvl_0_1 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.956 12:18:35 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:52.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:08:52.956 00:08:52.956 --- 10.0.0.2 ping statistics --- 00:08:52.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.956 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:52.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:08:52.956 00:08:52.956 --- 10.0.0.1 ping statistics --- 00:08:52.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.956 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=316280 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 316280 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 316280 ']' 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.956 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.957 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.957 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.957 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.957 [2024-11-20 12:18:35.377095] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:08:52.957 [2024-11-20 12:18:35.377147] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.957 [2024-11-20 12:18:35.457464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.957 [2024-11-20 12:18:35.498150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.957 [2024-11-20 12:18:35.498188] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:52.957 [2024-11-20 12:18:35.498195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.957 [2024-11-20 12:18:35.498201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.957 [2024-11-20 12:18:35.498211] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.957 [2024-11-20 12:18:35.498747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.217 [2024-11-20 12:18:36.246924] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.217 [2024-11-20 12:18:36.267088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.217 malloc0 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:53.217 { 00:08:53.217 "params": { 00:08:53.217 "name": "Nvme$subsystem", 00:08:53.217 "trtype": "$TEST_TRANSPORT", 00:08:53.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.217 "adrfam": "ipv4", 00:08:53.217 "trsvcid": "$NVMF_PORT", 00:08:53.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.217 "hdgst": ${hdgst:-false}, 00:08:53.217 "ddgst": ${ddgst:-false} 00:08:53.217 }, 00:08:53.217 "method": "bdev_nvme_attach_controller" 00:08:53.217 } 00:08:53.217 EOF 00:08:53.217 )") 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:53.217 12:18:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:53.217 "params": { 00:08:53.217 "name": "Nvme1", 00:08:53.217 "trtype": "tcp", 00:08:53.217 "traddr": "10.0.0.2", 00:08:53.217 "adrfam": "ipv4", 00:08:53.217 "trsvcid": "4420", 00:08:53.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.217 "hdgst": false, 00:08:53.217 "ddgst": false 00:08:53.217 }, 00:08:53.217 "method": "bdev_nvme_attach_controller" 00:08:53.217 }' 00:08:53.476 [2024-11-20 12:18:36.348048] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:08:53.476 [2024-11-20 12:18:36.348092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316433 ] 00:08:53.476 [2024-11-20 12:18:36.423132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.476 [2024-11-20 12:18:36.464434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.735 Running I/O for 10 seconds... 
00:08:55.608 8414.00 IOPS, 65.73 MiB/s [2024-11-20T11:18:39.659Z] 8475.00 IOPS, 66.21 MiB/s [2024-11-20T11:18:41.038Z] 8506.00 IOPS, 66.45 MiB/s [2024-11-20T11:18:41.974Z] 8496.50 IOPS, 66.38 MiB/s [2024-11-20T11:18:42.911Z] 8507.60 IOPS, 66.47 MiB/s [2024-11-20T11:18:43.847Z] 8523.50 IOPS, 66.59 MiB/s [2024-11-20T11:18:44.784Z] 8529.43 IOPS, 66.64 MiB/s [2024-11-20T11:18:45.720Z] 8536.25 IOPS, 66.69 MiB/s [2024-11-20T11:18:46.657Z] 8537.00 IOPS, 66.70 MiB/s [2024-11-20T11:18:46.916Z] 8538.40 IOPS, 66.71 MiB/s 00:09:03.800 Latency(us) 00:09:03.800 [2024-11-20T11:18:46.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.800 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:03.800 Verification LBA range: start 0x0 length 0x1000 00:09:03.800 Nvme1n1 : 10.01 8541.84 66.73 0.00 0.00 14942.61 1951.83 24504.77 00:09:03.800 [2024-11-20T11:18:46.916Z] =================================================================================================================== 00:09:03.800 [2024-11-20T11:18:46.916Z] Total : 8541.84 66.73 0.00 0.00 14942.61 1951.83 24504.77 00:09:03.800 12:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=318148 00:09:03.800 12:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:03.800 12:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.800 12:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:03.800 12:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:03.800 12:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:03.800 12:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:03.800 12:18:46 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:03.800 12:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:03.800 { 00:09:03.800 "params": { 00:09:03.800 "name": "Nvme$subsystem", 00:09:03.800 "trtype": "$TEST_TRANSPORT", 00:09:03.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.800 "adrfam": "ipv4", 00:09:03.800 "trsvcid": "$NVMF_PORT", 00:09:03.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.800 "hdgst": ${hdgst:-false}, 00:09:03.800 "ddgst": ${ddgst:-false} 00:09:03.800 }, 00:09:03.800 "method": "bdev_nvme_attach_controller" 00:09:03.800 } 00:09:03.800 EOF 00:09:03.800 )") 00:09:03.800 12:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:03.800 [2024-11-20 12:18:46.826255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-20 12:18:46.826287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 12:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:03.800 12:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:03.800 12:18:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:03.800 "params": { 00:09:03.800 "name": "Nvme1", 00:09:03.800 "trtype": "tcp", 00:09:03.800 "traddr": "10.0.0.2", 00:09:03.800 "adrfam": "ipv4", 00:09:03.800 "trsvcid": "4420", 00:09:03.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.800 "hdgst": false, 00:09:03.800 "ddgst": false 00:09:03.800 }, 00:09:03.800 "method": "bdev_nvme_attach_controller" 00:09:03.800 }' 00:09:03.800 [2024-11-20 12:18:46.838262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-20 12:18:46.838275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-20 12:18:46.850288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-20 12:18:46.850299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-20 12:18:46.862319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-20 12:18:46.862329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-20 12:18:46.865892] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:09:03.800 [2024-11-20 12:18:46.865935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid318148 ] 00:09:03.800 [2024-11-20 12:18:46.874352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-20 12:18:46.874364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-20 12:18:46.886381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-20 12:18:46.886391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.801 [2024-11-20 12:18:46.898415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.801 [2024-11-20 12:18:46.898426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.801 [2024-11-20 12:18:46.910447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.801 [2024-11-20 12:18:46.910457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-20 12:18:46.922479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-20 12:18:46.922489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-20 12:18:46.934512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-20 12:18:46.934522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-20 12:18:46.941093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.060 [2024-11-20 12:18:46.946542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:04.060 [2024-11-20 12:18:46.946553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-20 12:18:46.958577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-20 12:18:46.958591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-20 12:18:46.970606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-20 12:18:46.970621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-20 12:18:46.982642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-20 12:18:46.982656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-20 12:18:46.983042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.060 [2024-11-20 12:18:46.994684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-20 12:18:46.994698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-20 12:18:47.006713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-20 12:18:47.006734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-20 12:18:47.018742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-20 12:18:47.018758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-20 12:18:47.030774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.061 [2024-11-20 12:18:47.030786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.061 [2024-11-20 12:18:47.042805] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.061 [2024-11-20 12:18:47.042819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.061 [2024-11-20 12:18:47.054836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.061 [2024-11-20 12:18:47.054848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.061 [2024-11-20 12:18:47.066920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.061 [2024-11-20 12:18:47.066939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.061 [2024-11-20 12:18:47.078909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.061 [2024-11-20 12:18:47.078924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.061 [2024-11-20 12:18:47.090941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.061 [2024-11-20 12:18:47.090958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.061 [2024-11-20 12:18:47.102972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.061 [2024-11-20 12:18:47.102984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.061 [2024-11-20 12:18:47.115004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.061 [2024-11-20 12:18:47.115015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.061 [2024-11-20 12:18:47.127031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.061 [2024-11-20 12:18:47.127041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.061 [2024-11-20 12:18:47.139070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:04.061 [2024-11-20 12:18:47.139083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.061 [2024-11-20 12:18:47.151103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.061 [2024-11-20 12:18:47.151118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.061 [2024-11-20 12:18:47.163134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.061 [2024-11-20 12:18:47.163146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.061 [2024-11-20 12:18:47.175166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.061 [2024-11-20 12:18:47.175178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.319 [2024-11-20 12:18:47.187204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.319 [2024-11-20 12:18:47.187222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.319 [2024-11-20 12:18:47.199238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.319 [2024-11-20 12:18:47.199253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.319 [2024-11-20 12:18:47.211271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.319 [2024-11-20 12:18:47.211289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.319 Running I/O for 5 seconds... 
00:09:04.319 [2024-11-20 12:18:47.223300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.319 [2024-11-20 12:18:47.223312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.319 [2024-11-20 12:18:47.235804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.319 [2024-11-20 12:18:47.235825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.319 [2024-11-20 12:18:47.251575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.319 [2024-11-20 12:18:47.251596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.319 [2024-11-20 12:18:47.265890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.319 [2024-11-20 12:18:47.265911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.319 [2024-11-20 12:18:47.279838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.319 [2024-11-20 12:18:47.279859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.319 [2024-11-20 12:18:47.289278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.319 [2024-11-20 12:18:47.289298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.319 [2024-11-20 12:18:47.298917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-20 12:18:47.298936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-20 12:18:47.308496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-20 12:18:47.308516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-20 12:18:47.317953] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-20 12:18:47.317972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-20 12:18:47.332897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-20 12:18:47.332916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-20 12:18:47.343725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-20 12:18:47.343745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-20 12:18:47.353095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-20 12:18:47.353114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-20 12:18:47.368350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-20 12:18:47.368371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-20 12:18:47.378926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-20 12:18:47.378953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-20 12:18:47.387816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-20 12:18:47.387836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-20 12:18:47.397681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-20 12:18:47.397704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-20 12:18:47.412554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-20 12:18:47.412578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-20 12:18:47.428059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-20 12:18:47.428079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.580 [2024-11-20 12:18:47.437634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.580 [2024-11-20 12:18:47.437653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.580 [2024-11-20 12:18:47.446403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.580 [2024-11-20 12:18:47.446422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.580 [2024-11-20 12:18:47.461492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.580 [2024-11-20 12:18:47.461512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.580 [2024-11-20 12:18:47.476872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.580 [2024-11-20 12:18:47.476892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.580 [2024-11-20 12:18:47.491073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.580 [2024-11-20 12:18:47.491095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.580 [2024-11-20 12:18:47.502155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.580 [2024-11-20 12:18:47.502175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.580 [2024-11-20 12:18:47.516644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.580 
[2024-11-20 12:18:47.516664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.580 [2024-11-20 12:18:47.525680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.580 [2024-11-20 12:18:47.525700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.580 [2024-11-20 12:18:47.535025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.580 [2024-11-20 12:18:47.535044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.580 [2024-11-20 12:18:47.550025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.580 [2024-11-20 12:18:47.550045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.580 [2024-11-20 12:18:47.561060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.580 [2024-11-20 12:18:47.561079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.580 [2024-11-20 12:18:47.570498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.580 [2024-11-20 12:18:47.570517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.580 [2024-11-20 12:18:47.579786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.580 [2024-11-20 12:18:47.579805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.581 [2024-11-20 12:18:47.589097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.581 [2024-11-20 12:18:47.589116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.581 [2024-11-20 12:18:47.603905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.581 [2024-11-20 12:18:47.603924] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.581 [2024-11-20 12:18:47.613278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.581 [2024-11-20 12:18:47.613297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.581 [2024-11-20 12:18:47.623382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.581 [2024-11-20 12:18:47.623401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.581 [2024-11-20 12:18:47.638050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.581 [2024-11-20 12:18:47.638078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.581 [2024-11-20 12:18:47.647265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.581 [2024-11-20 12:18:47.647284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.581 [2024-11-20 12:18:47.661723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.581 [2024-11-20 12:18:47.661743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.581 [2024-11-20 12:18:47.670828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.581 [2024-11-20 12:18:47.670847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.581 [2024-11-20 12:18:47.680355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.581 [2024-11-20 12:18:47.680374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.581 [2024-11-20 12:18:47.689825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.581 [2024-11-20 12:18:47.689842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:04.841 [2024-11-20 12:18:47.699121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.841 [2024-11-20 12:18:47.699139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats continuously from 12:18:47.699 through 12:18:49.569, interleaved with the progress markers below ...]
00:09:05.361 16296.00 IOPS, 127.31 MiB/s [2024-11-20T11:18:48.476Z]
00:09:06.138 16400.00 IOPS, 128.12 MiB/s [2024-11-20T11:18:49.254Z]
[2024-11-20 12:18:49.569768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.657
[2024-11-20 12:18:49.569787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.657 [2024-11-20 12:18:49.579164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.657 [2024-11-20 12:18:49.579184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.657 [2024-11-20 12:18:49.588728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.657 [2024-11-20 12:18:49.588747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.657 [2024-11-20 12:18:49.603507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.657 [2024-11-20 12:18:49.603526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.657 [2024-11-20 12:18:49.617311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.657 [2024-11-20 12:18:49.617329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.657 [2024-11-20 12:18:49.626334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.657 [2024-11-20 12:18:49.626352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.657 [2024-11-20 12:18:49.635192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.657 [2024-11-20 12:18:49.635211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.657 [2024-11-20 12:18:49.644818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.657 [2024-11-20 12:18:49.644836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.657 [2024-11-20 12:18:49.659356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.657 [2024-11-20 12:18:49.659374] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.657 [2024-11-20 12:18:49.672748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.657 [2024-11-20 12:18:49.672767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.657 [2024-11-20 12:18:49.680330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.657 [2024-11-20 12:18:49.680349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.657 [2024-11-20 12:18:49.690799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.657 [2024-11-20 12:18:49.690818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.657 [2024-11-20 12:18:49.699741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.657 [2024-11-20 12:18:49.699759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.657 [2024-11-20 12:18:49.714442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.657 [2024-11-20 12:18:49.714461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.657 [2024-11-20 12:18:49.728450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.657 [2024-11-20 12:18:49.728469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.657 [2024-11-20 12:18:49.743305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.657 [2024-11-20 12:18:49.743324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.657 [2024-11-20 12:18:49.754673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.657 [2024-11-20 12:18:49.754691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:06.657 [2024-11-20 12:18:49.764182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.657 [2024-11-20 12:18:49.764201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.778757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.778776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.787818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.787841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.802553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.802572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.810191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.810209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.819536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.819555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.834275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.834295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.843441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.843460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.858787] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.858805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.874001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.874020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.883168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.883186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.897800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.897819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.911344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.911364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.920853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.920872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.930381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.930400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.939302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.939322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.954044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.954064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.963295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.963315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.972889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.972908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.987742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.987761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:49.998302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:49.998321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:50.013460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:50.013484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.915 [2024-11-20 12:18:50.024943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.915 [2024-11-20 12:18:50.024971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.034506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.034526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.043295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 
[2024-11-20 12:18:50.043313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.058189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.058208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.068632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.068651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.082645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.082666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.096782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.096803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.111155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.111175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.120247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.120267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.135554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.135574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.150312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.150332] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.164836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.164856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.174160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.174180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.183975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.183997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.198790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.198810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.213105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.213125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.224322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.224342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 16394.33 IOPS, 128.08 MiB/s [2024-11-20T11:18:50.290Z] [2024-11-20 12:18:50.238761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.238780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.247816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.247841] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.263079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.263099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.274038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.274058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.174 [2024-11-20 12:18:50.288421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.174 [2024-11-20 12:18:50.288442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.433 [2024-11-20 12:18:50.302129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.433 [2024-11-20 12:18:50.302148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.433 [2024-11-20 12:18:50.311134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.433 [2024-11-20 12:18:50.311152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.433 [2024-11-20 12:18:50.326086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.433 [2024-11-20 12:18:50.326105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.433 [2024-11-20 12:18:50.340261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.433 [2024-11-20 12:18:50.340280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.433 [2024-11-20 12:18:50.356151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.433 [2024-11-20 12:18:50.356170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:07.433 [2024-11-20 12:18:50.365842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.433 [2024-11-20 12:18:50.365860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.433 [2024-11-20 12:18:50.374779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.433 [2024-11-20 12:18:50.374797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.433 [2024-11-20 12:18:50.389414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.433 [2024-11-20 12:18:50.389433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.433 [2024-11-20 12:18:50.398578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.433 [2024-11-20 12:18:50.398597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.433 [2024-11-20 12:18:50.408050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.433 [2024-11-20 12:18:50.408069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.433 [2024-11-20 12:18:50.417789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.433 [2024-11-20 12:18:50.417808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.433 [2024-11-20 12:18:50.427432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.433 [2024-11-20 12:18:50.427451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.433 [2024-11-20 12:18:50.442594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.433 [2024-11-20 12:18:50.442613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.434 [2024-11-20 12:18:50.453583] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.434 [2024-11-20 12:18:50.453603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.434 [2024-11-20 12:18:50.462693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.434 [2024-11-20 12:18:50.462711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.434 [2024-11-20 12:18:50.472117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.434 [2024-11-20 12:18:50.472140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.434 [2024-11-20 12:18:50.481846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.434 [2024-11-20 12:18:50.481865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.434 [2024-11-20 12:18:50.496848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.434 [2024-11-20 12:18:50.496867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.434 [2024-11-20 12:18:50.504651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.434 [2024-11-20 12:18:50.504669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.434 [2024-11-20 12:18:50.513791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.434 [2024-11-20 12:18:50.513810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.434 [2024-11-20 12:18:50.523202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.434 [2024-11-20 12:18:50.523222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.434 [2024-11-20 12:18:50.538454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:07.434 [2024-11-20 12:18:50.538473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.554195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.554215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.568792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.568811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.580161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.580182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.589002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.589020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.598212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.598230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.612773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.612793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.627041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.627060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.638048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 
[2024-11-20 12:18:50.638067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.652508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.652527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.661432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.661450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.676198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.676217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.691335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.691354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.705684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.705702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.714576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.714595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.728960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.728979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.742897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.742917] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.752157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.752186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.760944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.760969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.770824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.770843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.785568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.785587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.693 [2024-11-20 12:18:50.795953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.693 [2024-11-20 12:18:50.795972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.953 [2024-11-20 12:18:50.810392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.953 [2024-11-20 12:18:50.810412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.953 [2024-11-20 12:18:50.819251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.953 [2024-11-20 12:18:50.819269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.953 [2024-11-20 12:18:50.828165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.953 [2024-11-20 12:18:50.828184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace
00:09:07.953 [2024-11-20 12:18:50.837649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:07.953 [2024-11-20 12:18:50.837668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats with advancing timestamps ...]
00:09:08.213 [2024-11-20 12:18:51.227863]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.213 16416.00 IOPS, 128.25 MiB/s [2024-11-20T11:18:51.329Z]
[... the "Requested NSID 1 already in use" / "Unable to add namespace" error pair continues to repeat with advancing timestamps ...]
00:09:09.251 [2024-11-20 12:18:52.194375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use
00:09:09.251 [2024-11-20 12:18:52.194394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair repeats with advancing timestamps ...]
00:09:09.252 16430.00 IOPS, 128.36 MiB/s [2024-11-20T11:18:52.368Z]
[... further error-pair repeats ...]
00:09:09.252
00:09:09.252 Latency(us)
00:09:09.252 [2024-11-20T11:18:52.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:09.252 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:09.252 Nvme1n1 : 5.01 16432.79 128.38 0.00 0.00 7781.93 3291.05 15500.69
00:09:09.252 [2024-11-20T11:18:52.368Z] ===================================================================================================================
00:09:09.252 [2024-11-20T11:18:52.368Z] Total : 16432.79 128.38 0.00 0.00 7781.93 3291.05 15500.69
00:09:09.252 [2024-11-20 12:18:52.244976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-20 12:18:52.244994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair continues through [2024-11-20 12:18:52.401386] ...]
00:09:09.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (318148) - No such process
00:09:09.512 12:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 318148
00:09:09.512 12:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:09.512 12:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.512 12:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:09.512 12:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.512 12:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:09.512 12:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.512 12:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:09.512 delay0
12:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.512 12:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:09.512 12:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:09.512 12:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:09.512 12:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:09.512 12:18:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:09.512 [2024-11-20 12:18:52.547633] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:16.076 [2024-11-20 12:18:58.773024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad0070 is same with the state(6) to be set
00:09:16.076 [2024-11-20 12:18:58.773062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad0070 is same with the state(6) to be set
00:09:16.076 Initializing NVMe Controllers
00:09:16.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:16.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:16.076 Initialization complete. Launching workers.
00:09:16.076 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 753
00:09:16.076 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1033, failed to submit 40
00:09:16.076 success 857, unsuccessful 176, failed 0
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:16.076 rmmod nvme_tcp
00:09:16.076 rmmod nvme_fabrics
00:09:16.076 rmmod nvme_keyring
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 316280 ']'
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 316280
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 316280 ']'
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 316280
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 316280
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 316280'
00:09:16.076 killing process with pid 316280
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 316280
00:09:16.076 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 316280
00:09:16.076 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:16.076 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:16.076 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:16.076 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:09:16.076 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:09:16.077 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:16.077 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:09:16.077 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:16.077 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:16.077 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:16.077 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:16.077 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:18.615
00:09:18.615 real 0m32.092s
00:09:18.615 user 0m42.689s
00:09:18.615 sys 0m11.275s
00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:18.615 ************************************
00:09:18.615 END TEST nvmf_zcopy
00:09:18.615 ************************************
00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:18.615 ************************************
00:09:18.615 START TEST nvmf_nmic
00:09:18.615 ************************************
00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:18.615 * Looking for test storage...
00:09:18.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.615 12:19:01 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:18.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.615 --rc genhtml_branch_coverage=1 00:09:18.615 --rc genhtml_function_coverage=1 00:09:18.615 --rc genhtml_legend=1 00:09:18.615 --rc geninfo_all_blocks=1 00:09:18.615 --rc geninfo_unexecuted_blocks=1 
00:09:18.615 00:09:18.615 ' 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:18.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.615 --rc genhtml_branch_coverage=1 00:09:18.615 --rc genhtml_function_coverage=1 00:09:18.615 --rc genhtml_legend=1 00:09:18.615 --rc geninfo_all_blocks=1 00:09:18.615 --rc geninfo_unexecuted_blocks=1 00:09:18.615 00:09:18.615 ' 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:18.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.615 --rc genhtml_branch_coverage=1 00:09:18.615 --rc genhtml_function_coverage=1 00:09:18.615 --rc genhtml_legend=1 00:09:18.615 --rc geninfo_all_blocks=1 00:09:18.615 --rc geninfo_unexecuted_blocks=1 00:09:18.615 00:09:18.615 ' 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:18.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.615 --rc genhtml_branch_coverage=1 00:09:18.615 --rc genhtml_function_coverage=1 00:09:18.615 --rc genhtml_legend=1 00:09:18.615 --rc geninfo_all_blocks=1 00:09:18.615 --rc geninfo_unexecuted_blocks=1 00:09:18.615 00:09:18.615 ' 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.615 12:19:01 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:18.615 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:18.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:18.616 
12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:18.616 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.201 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.202 12:19:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:25.202 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:25.202 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:25.202 Found net devices under 0000:86:00.0: cvl_0_0 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:25.202 Found net devices under 0000:86:00.1: cvl_0_1 00:09:25.202 
12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:25.202 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:25.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:25.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:09:25.203 00:09:25.203 --- 10.0.0.2 ping statistics --- 00:09:25.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.203 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:25.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:25.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:09:25.203 00:09:25.203 --- 10.0.0.1 ping statistics --- 00:09:25.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.203 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=323744 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 323744 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 323744 ']' 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.203 [2024-11-20 12:19:07.445100] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:09:25.203 [2024-11-20 12:19:07.445148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.203 [2024-11-20 12:19:07.523473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:25.203 [2024-11-20 12:19:07.567356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.203 [2024-11-20 12:19:07.567393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:25.203 [2024-11-20 12:19:07.567400] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.203 [2024-11-20 12:19:07.567407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.203 [2024-11-20 12:19:07.567412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.203 [2024-11-20 12:19:07.568845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.203 [2024-11-20 12:19:07.568971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.203 [2024-11-20 12:19:07.569080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.203 [2024-11-20 12:19:07.569080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.203 [2024-11-20 12:19:07.706647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.203 
12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.203 Malloc0 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.203 [2024-11-20 12:19:07.769711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:25.203 test case1: single bdev can't be used in multiple subsystems 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.203 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.204 [2024-11-20 12:19:07.797630] bdev.c:8203:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:25.204 [2024-11-20 
12:19:07.797649] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:25.204 [2024-11-20 12:19:07.797657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.204 request: 00:09:25.204 { 00:09:25.204 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:25.204 "namespace": { 00:09:25.204 "bdev_name": "Malloc0", 00:09:25.204 "no_auto_visible": false 00:09:25.204 }, 00:09:25.204 "method": "nvmf_subsystem_add_ns", 00:09:25.204 "req_id": 1 00:09:25.204 } 00:09:25.204 Got JSON-RPC error response 00:09:25.204 response: 00:09:25.204 { 00:09:25.204 "code": -32602, 00:09:25.204 "message": "Invalid parameters" 00:09:25.204 } 00:09:25.204 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:25.204 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:25.204 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:25.204 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:25.204 Adding namespace failed - expected result. 
00:09:25.204 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:25.204 test case2: host connect to nvmf target in multiple paths 00:09:25.204 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:25.204 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.204 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.204 [2024-11-20 12:19:07.809761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:25.204 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.204 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:26.141 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:27.078 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:27.078 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:27.078 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:27.078 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:27.078 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:09:28.983 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:28.983 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:28.983 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:28.983 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:28.983 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:28.983 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:28.983 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:28.983 [global] 00:09:28.983 thread=1 00:09:28.983 invalidate=1 00:09:28.983 rw=write 00:09:28.983 time_based=1 00:09:28.983 runtime=1 00:09:28.983 ioengine=libaio 00:09:28.983 direct=1 00:09:28.983 bs=4096 00:09:28.983 iodepth=1 00:09:28.983 norandommap=0 00:09:28.983 numjobs=1 00:09:28.983 00:09:28.983 verify_dump=1 00:09:28.983 verify_backlog=512 00:09:28.983 verify_state_save=0 00:09:28.983 do_verify=1 00:09:28.983 verify=crc32c-intel 00:09:28.983 [job0] 00:09:28.983 filename=/dev/nvme0n1 00:09:29.242 Could not set queue depth (nvme0n1) 00:09:29.242 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.242 fio-3.35 00:09:29.242 Starting 1 thread 00:09:30.619 00:09:30.619 job0: (groupid=0, jobs=1): err= 0: pid=324692: Wed Nov 20 12:19:13 2024 00:09:30.619 read: IOPS=611, BW=2446KiB/s (2504kB/s)(2536KiB/1037msec) 00:09:30.619 slat (nsec): min=6549, max=30129, avg=7862.94, stdev=2785.70 00:09:30.619 clat (usec): min=183, max=41037, avg=1387.77, stdev=6767.52 00:09:30.619 lat (usec): min=191, max=41059, 
avg=1395.63, stdev=6769.99 00:09:30.619 clat percentiles (usec): 00:09:30.619 | 1.00th=[ 192], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 00:09:30.619 | 30.00th=[ 227], 40.00th=[ 229], 50.00th=[ 231], 60.00th=[ 233], 00:09:30.619 | 70.00th=[ 235], 80.00th=[ 239], 90.00th=[ 243], 95.00th=[ 253], 00:09:30.619 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:30.619 | 99.99th=[41157] 00:09:30.619 write: IOPS=987, BW=3950KiB/s (4045kB/s)(4096KiB/1037msec); 0 zone resets 00:09:30.619 slat (nsec): min=9243, max=47526, avg=10359.05, stdev=1696.14 00:09:30.619 clat (usec): min=112, max=411, avg=135.07, stdev=13.22 00:09:30.619 lat (usec): min=128, max=459, avg=145.43, stdev=14.07 00:09:30.619 clat percentiles (usec): 00:09:30.619 | 1.00th=[ 123], 5.00th=[ 126], 10.00th=[ 127], 20.00th=[ 130], 00:09:30.619 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 133], 60.00th=[ 135], 00:09:30.619 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 143], 95.00th=[ 157], 00:09:30.619 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 255], 99.95th=[ 412], 00:09:30.619 | 99.99th=[ 412] 00:09:30.619 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:09:30.619 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:30.619 lat (usec) : 250=97.83%, 500=1.03%, 750=0.06% 00:09:30.619 lat (msec) : 50=1.09% 00:09:30.619 cpu : usr=0.58%, sys=1.64%, ctx=1658, majf=0, minf=1 00:09:30.619 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.619 issued rwts: total=634,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.619 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.619 00:09:30.619 Run status group 0 (all jobs): 00:09:30.619 READ: bw=2446KiB/s (2504kB/s), 2446KiB/s-2446KiB/s (2504kB/s-2504kB/s), io=2536KiB (2597kB), 
run=1037-1037msec 00:09:30.619 WRITE: bw=3950KiB/s (4045kB/s), 3950KiB/s-3950KiB/s (4045kB/s-4045kB/s), io=4096KiB (4194kB), run=1037-1037msec 00:09:30.619 00:09:30.619 Disk stats (read/write): 00:09:30.619 nvme0n1: ios=680/1024, merge=0/0, ticks=724/139, in_queue=863, util=91.48% 00:09:30.619 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:30.619 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:30.619 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:30.619 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:30.619 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.619 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.619 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:30.619 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:30.619 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:30.619 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:30.619 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:30.619 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:30.619 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:30.619 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:30.619 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- 
# for i in {1..20} 00:09:30.619 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:30.878 rmmod nvme_tcp 00:09:30.878 rmmod nvme_fabrics 00:09:30.878 rmmod nvme_keyring 00:09:30.878 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.878 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:30.878 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:30.878 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 323744 ']' 00:09:30.878 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 323744 00:09:30.878 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 323744 ']' 00:09:30.878 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 323744 00:09:30.878 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:30.878 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.878 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 323744 00:09:30.878 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.878 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.878 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 323744' 00:09:30.878 killing process with pid 323744 00:09:30.879 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 323744 00:09:30.879 12:19:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 323744 00:09:31.138 12:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:31.138 12:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:31.138 12:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:31.138 12:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:31.138 12:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:31.138 12:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:31.138 12:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:31.138 12:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:31.138 12:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:31.138 12:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.138 12:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.138 12:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.155 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:33.155 00:09:33.155 real 0m14.909s 00:09:33.155 user 0m32.918s 00:09:33.155 sys 0m5.268s 00:09:33.155 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.155 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.155 ************************************ 00:09:33.155 END TEST nvmf_nmic 00:09:33.155 ************************************ 00:09:33.155 12:19:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
00:09:33.155 12:19:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:33.155 12:19:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.155 12:19:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:33.155 ************************************ 00:09:33.155 START TEST nvmf_fio_target 00:09:33.155 ************************************ 00:09:33.155 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:33.416 * Looking for test storage... 00:09:33.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read 
-ra ver2 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.416 12:19:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:33.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.416 --rc genhtml_branch_coverage=1 00:09:33.416 --rc genhtml_function_coverage=1 00:09:33.416 --rc genhtml_legend=1 00:09:33.416 --rc geninfo_all_blocks=1 00:09:33.416 --rc geninfo_unexecuted_blocks=1 00:09:33.416 00:09:33.416 ' 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:33.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.416 --rc genhtml_branch_coverage=1 00:09:33.416 --rc genhtml_function_coverage=1 00:09:33.416 --rc genhtml_legend=1 00:09:33.416 --rc geninfo_all_blocks=1 00:09:33.416 --rc geninfo_unexecuted_blocks=1 00:09:33.416 00:09:33.416 ' 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:33.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.416 --rc genhtml_branch_coverage=1 00:09:33.416 --rc genhtml_function_coverage=1 00:09:33.416 --rc genhtml_legend=1 00:09:33.416 --rc geninfo_all_blocks=1 00:09:33.416 --rc geninfo_unexecuted_blocks=1 00:09:33.416 00:09:33.416 ' 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:33.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.416 --rc 
genhtml_branch_coverage=1 00:09:33.416 --rc genhtml_function_coverage=1 00:09:33.416 --rc genhtml_legend=1 00:09:33.416 --rc geninfo_all_blocks=1 00:09:33.416 --rc geninfo_unexecuted_blocks=1 00:09:33.416 00:09:33.416 ' 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.416 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:33.417 12:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.982 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.982 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:39.983 12:19:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:39.983 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:39.983 12:19:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:39.983 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:39.983 Found net devices under 0000:86:00.0: cvl_0_0 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:39.983 Found net devices under 0000:86:00.1: cvl_0_1 
00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:39.983 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:39.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:39.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:09:39.984 00:09:39.984 --- 10.0.0.2 ping statistics --- 00:09:39.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.984 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:39.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:39.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:09:39.984 00:09:39.984 --- 10.0.0.1 ping statistics --- 00:09:39.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.984 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=328536 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 328536 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 328536 ']' 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.984 [2024-11-20 12:19:22.432557] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:09:39.984 [2024-11-20 12:19:22.432604] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.984 [2024-11-20 12:19:22.513361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:39.984 [2024-11-20 12:19:22.558138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.984 [2024-11-20 12:19:22.558174] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.984 [2024-11-20 12:19:22.558182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.984 [2024-11-20 12:19:22.558187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.984 [2024-11-20 12:19:22.558192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:39.984 [2024-11-20 12:19:22.559643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.984 [2024-11-20 12:19:22.559668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.984 [2024-11-20 12:19:22.559755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.984 [2024-11-20 12:19:22.559756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:39.984 [2024-11-20 12:19:22.861990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.984 12:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:40.243 12:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:40.243 12:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:40.243 12:19:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:40.243 12:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:40.501 12:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:40.501 12:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:40.760 12:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:40.760 12:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:41.018 12:19:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:41.276 12:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:41.276 12:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:41.276 12:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:41.276 12:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:41.535 12:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:41.535 12:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:41.794 12:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:42.053 12:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:42.053 12:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:42.312 12:19:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:42.312 12:19:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:42.312 12:19:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:42.571 [2024-11-20 12:19:25.557840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:42.571 12:19:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:42.829 12:19:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:43.088 12:19:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:09:44.025 12:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:44.025 12:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:44.025 12:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:44.025 12:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:44.025 12:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:44.025 12:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:46.558 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:46.558 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:46.558 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:46.558 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:46.558 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:46.558 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:46.558 12:19:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:46.558 [global] 00:09:46.558 thread=1 00:09:46.558 invalidate=1 00:09:46.558 rw=write 00:09:46.558 time_based=1 00:09:46.558 runtime=1 00:09:46.558 ioengine=libaio 00:09:46.558 direct=1 00:09:46.558 bs=4096 00:09:46.558 iodepth=1 00:09:46.558 norandommap=0 00:09:46.558 numjobs=1 00:09:46.558 00:09:46.558 
verify_dump=1 00:09:46.558 verify_backlog=512 00:09:46.558 verify_state_save=0 00:09:46.558 do_verify=1 00:09:46.558 verify=crc32c-intel 00:09:46.558 [job0] 00:09:46.558 filename=/dev/nvme0n1 00:09:46.558 [job1] 00:09:46.558 filename=/dev/nvme0n2 00:09:46.558 [job2] 00:09:46.558 filename=/dev/nvme0n3 00:09:46.558 [job3] 00:09:46.558 filename=/dev/nvme0n4 00:09:46.558 Could not set queue depth (nvme0n1) 00:09:46.558 Could not set queue depth (nvme0n2) 00:09:46.558 Could not set queue depth (nvme0n3) 00:09:46.558 Could not set queue depth (nvme0n4) 00:09:46.558 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.559 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.559 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.559 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.559 fio-3.35 00:09:46.559 Starting 4 threads 00:09:47.938 00:09:47.938 job0: (groupid=0, jobs=1): err= 0: pid=329920: Wed Nov 20 12:19:30 2024 00:09:47.938 read: IOPS=2022, BW=8091KiB/s (8285kB/s)(8204KiB/1014msec) 00:09:47.938 slat (nsec): min=2263, max=26044, avg=5680.72, stdev=3214.76 00:09:47.938 clat (usec): min=141, max=40938, avg=283.96, stdev=1551.18 00:09:47.938 lat (usec): min=143, max=40963, avg=289.64, stdev=1551.75 00:09:47.938 clat percentiles (usec): 00:09:47.938 | 1.00th=[ 155], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 182], 00:09:47.938 | 30.00th=[ 190], 40.00th=[ 202], 50.00th=[ 217], 60.00th=[ 235], 00:09:47.938 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 285], 95.00th=[ 302], 00:09:47.938 | 99.00th=[ 412], 99.50th=[ 486], 99.90th=[40633], 99.95th=[41157], 00:09:47.938 | 99.99th=[41157] 00:09:47.938 write: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec); 0 zone resets 00:09:47.938 slat (nsec): min=3349, max=40342, avg=8565.10, 
stdev=4629.90 00:09:47.938 clat (usec): min=97, max=365, avg=151.33, stdev=43.01 00:09:47.938 lat (usec): min=100, max=377, avg=159.90, stdev=45.57 00:09:47.938 clat percentiles (usec): 00:09:47.938 | 1.00th=[ 104], 5.00th=[ 111], 10.00th=[ 115], 20.00th=[ 120], 00:09:47.938 | 30.00th=[ 125], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 141], 00:09:47.938 | 70.00th=[ 149], 80.00th=[ 190], 90.00th=[ 231], 95.00th=[ 243], 00:09:47.938 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 338], 99.95th=[ 347], 00:09:47.938 | 99.99th=[ 367] 00:09:47.938 bw ( KiB/s): min= 9992, max=10488, per=57.67%, avg=10240.00, stdev=350.72, samples=2 00:09:47.938 iops : min= 2498, max= 2622, avg=2560.00, stdev=87.68, samples=2 00:09:47.938 lat (usec) : 100=0.09%, 250=86.19%, 500=13.58%, 750=0.09% 00:09:47.938 lat (msec) : 50=0.07% 00:09:47.938 cpu : usr=2.37%, sys=5.43%, ctx=4613, majf=0, minf=1 00:09:47.938 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.938 issued rwts: total=2051,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.938 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.938 job1: (groupid=0, jobs=1): err= 0: pid=329938: Wed Nov 20 12:19:30 2024 00:09:47.938 read: IOPS=864, BW=3457KiB/s (3540kB/s)(3460KiB/1001msec) 00:09:47.938 slat (nsec): min=7324, max=37628, avg=8695.12, stdev=2498.48 00:09:47.938 clat (usec): min=160, max=41410, avg=881.73, stdev=5145.18 00:09:47.938 lat (usec): min=168, max=41419, avg=890.43, stdev=5146.76 00:09:47.938 clat percentiles (usec): 00:09:47.938 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:09:47.938 | 30.00th=[ 196], 40.00th=[ 204], 50.00th=[ 212], 60.00th=[ 225], 00:09:47.938 | 70.00th=[ 245], 80.00th=[ 260], 90.00th=[ 285], 95.00th=[ 302], 00:09:47.938 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:09:47.938 | 99.99th=[41157] 00:09:47.938 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:47.938 slat (nsec): min=11043, max=43327, avg=12365.26, stdev=2040.65 00:09:47.938 clat (usec): min=117, max=415, avg=206.23, stdev=40.93 00:09:47.938 lat (usec): min=129, max=429, avg=218.60, stdev=41.12 00:09:47.938 clat percentiles (usec): 00:09:47.938 | 1.00th=[ 123], 5.00th=[ 131], 10.00th=[ 139], 20.00th=[ 174], 00:09:47.938 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 212], 60.00th=[ 225], 00:09:47.938 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 258], 00:09:47.938 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 400], 99.95th=[ 416], 00:09:47.938 | 99.99th=[ 416] 00:09:47.938 bw ( KiB/s): min= 8192, max= 8192, per=46.13%, avg=8192.00, stdev= 0.00, samples=1 00:09:47.938 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:47.938 lat (usec) : 250=83.27%, 500=15.99% 00:09:47.938 lat (msec) : 50=0.74% 00:09:47.938 cpu : usr=1.90%, sys=2.80%, ctx=1890, majf=0, minf=1 00:09:47.938 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.938 issued rwts: total=865,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.938 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.938 job2: (groupid=0, jobs=1): err= 0: pid=329947: Wed Nov 20 12:19:30 2024 00:09:47.938 read: IOPS=317, BW=1269KiB/s (1299kB/s)(1308KiB/1031msec) 00:09:47.938 slat (nsec): min=6863, max=24010, avg=8612.32, stdev=3822.37 00:09:47.938 clat (usec): min=175, max=41555, avg=2756.28, stdev=9828.35 00:09:47.938 lat (usec): min=183, max=41564, avg=2764.89, stdev=9828.58 00:09:47.938 clat percentiles (usec): 00:09:47.938 | 1.00th=[ 192], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 237], 00:09:47.938 | 30.00th=[ 241], 40.00th=[ 245], 
50.00th=[ 249], 60.00th=[ 253], 00:09:47.938 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 310], 95.00th=[40633], 00:09:47.938 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:47.938 | 99.99th=[41681] 00:09:47.938 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:09:47.938 slat (nsec): min=9850, max=38418, avg=10973.82, stdev=1541.80 00:09:47.938 clat (usec): min=124, max=366, avg=232.89, stdev=30.12 00:09:47.938 lat (usec): min=134, max=400, avg=243.86, stdev=30.31 00:09:47.938 clat percentiles (usec): 00:09:47.938 | 1.00th=[ 167], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 210], 00:09:47.938 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 239], 00:09:47.939 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 285], 00:09:47.939 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 367], 99.95th=[ 367], 00:09:47.939 | 99.99th=[ 367] 00:09:47.939 bw ( KiB/s): min= 4096, max= 4096, per=23.07%, avg=4096.00, stdev= 0.00, samples=1 00:09:47.939 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:47.939 lat (usec) : 250=70.68%, 500=26.94% 00:09:47.939 lat (msec) : 50=2.38% 00:09:47.939 cpu : usr=0.68%, sys=0.49%, ctx=840, majf=0, minf=1 00:09:47.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.939 issued rwts: total=327,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.939 job3: (groupid=0, jobs=1): err= 0: pid=329948: Wed Nov 20 12:19:30 2024 00:09:47.939 read: IOPS=22, BW=88.6KiB/s (90.8kB/s)(92.0KiB/1038msec) 00:09:47.939 slat (nsec): min=10464, max=25041, avg=22245.96, stdev=2955.29 00:09:47.939 clat (usec): min=40538, max=41080, avg=40942.68, stdev=106.27 00:09:47.939 lat (usec): min=40549, max=41102, avg=40964.93, 
stdev=108.25 00:09:47.939 clat percentiles (usec): 00:09:47.939 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:47.939 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:47.939 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:47.939 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:47.939 | 99.99th=[41157] 00:09:47.939 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:09:47.939 slat (nsec): min=10494, max=42710, avg=11879.55, stdev=2543.02 00:09:47.939 clat (usec): min=145, max=388, avg=172.35, stdev=22.36 00:09:47.939 lat (usec): min=157, max=400, avg=184.23, stdev=22.89 00:09:47.939 clat percentiles (usec): 00:09:47.939 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:09:47.939 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:09:47.939 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 235], 00:09:47.939 | 99.00th=[ 243], 99.50th=[ 247], 99.90th=[ 388], 99.95th=[ 388], 00:09:47.939 | 99.99th=[ 388] 00:09:47.939 bw ( KiB/s): min= 4096, max= 4096, per=23.07%, avg=4096.00, stdev= 0.00, samples=1 00:09:47.939 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:47.939 lat (usec) : 250=95.33%, 500=0.37% 00:09:47.939 lat (msec) : 50=4.30% 00:09:47.939 cpu : usr=0.48%, sys=0.87%, ctx=535, majf=0, minf=2 00:09:47.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.939 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.939 00:09:47.939 Run status group 0 (all jobs): 00:09:47.939 READ: bw=12.3MiB/s (12.9MB/s), 88.6KiB/s-8091KiB/s (90.8kB/s-8285kB/s), io=12.8MiB (13.4MB), run=1001-1038msec 
00:09:47.939 WRITE: bw=17.3MiB/s (18.2MB/s), 1973KiB/s-9.86MiB/s (2020kB/s-10.3MB/s), io=18.0MiB (18.9MB), run=1001-1038msec 00:09:47.939 00:09:47.939 Disk stats (read/write): 00:09:47.939 nvme0n1: ios=2097/2272, merge=0/0, ticks=676/326, in_queue=1002, util=85.57% 00:09:47.939 nvme0n2: ios=546/1024, merge=0/0, ticks=1515/200, in_queue=1715, util=89.74% 00:09:47.939 nvme0n3: ios=338/512, merge=0/0, ticks=1597/120, in_queue=1717, util=93.53% 00:09:47.939 nvme0n4: ios=75/512, merge=0/0, ticks=809/79, in_queue=888, util=95.16% 00:09:47.939 12:19:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:47.939 [global] 00:09:47.939 thread=1 00:09:47.939 invalidate=1 00:09:47.939 rw=randwrite 00:09:47.939 time_based=1 00:09:47.939 runtime=1 00:09:47.939 ioengine=libaio 00:09:47.939 direct=1 00:09:47.939 bs=4096 00:09:47.939 iodepth=1 00:09:47.939 norandommap=0 00:09:47.939 numjobs=1 00:09:47.939 00:09:47.939 verify_dump=1 00:09:47.939 verify_backlog=512 00:09:47.939 verify_state_save=0 00:09:47.939 do_verify=1 00:09:47.939 verify=crc32c-intel 00:09:47.939 [job0] 00:09:47.939 filename=/dev/nvme0n1 00:09:47.939 [job1] 00:09:47.939 filename=/dev/nvme0n2 00:09:47.939 [job2] 00:09:47.939 filename=/dev/nvme0n3 00:09:47.939 [job3] 00:09:47.939 filename=/dev/nvme0n4 00:09:47.939 Could not set queue depth (nvme0n1) 00:09:47.939 Could not set queue depth (nvme0n2) 00:09:47.939 Could not set queue depth (nvme0n3) 00:09:47.939 Could not set queue depth (nvme0n4) 00:09:47.939 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.939 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.939 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.939 job3: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.939 fio-3.35 00:09:47.939 Starting 4 threads 00:09:49.317 00:09:49.317 job0: (groupid=0, jobs=1): err= 0: pid=330323: Wed Nov 20 12:19:32 2024 00:09:49.317 read: IOPS=24, BW=96.2KiB/s (98.6kB/s)(100KiB/1039msec) 00:09:49.317 slat (nsec): min=9402, max=25382, avg=20292.24, stdev=4845.54 00:09:49.317 clat (usec): min=207, max=41099, avg=37693.15, stdev=11278.75 00:09:49.317 lat (usec): min=220, max=41119, avg=37713.44, stdev=11279.29 00:09:49.317 clat percentiles (usec): 00:09:49.317 | 1.00th=[ 208], 5.00th=[ 229], 10.00th=[40633], 20.00th=[40633], 00:09:49.317 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:49.317 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:49.317 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:49.317 | 99.99th=[41157] 00:09:49.317 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:09:49.317 slat (nsec): min=10331, max=35104, avg=11748.78, stdev=1840.86 00:09:49.317 clat (usec): min=141, max=269, avg=171.28, stdev=16.60 00:09:49.317 lat (usec): min=152, max=292, avg=183.03, stdev=17.01 00:09:49.317 clat percentiles (usec): 00:09:49.317 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 159], 00:09:49.317 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:09:49.317 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 202], 00:09:49.317 | 99.00th=[ 231], 99.50th=[ 251], 99.90th=[ 269], 99.95th=[ 269], 00:09:49.317 | 99.99th=[ 269] 00:09:49.317 bw ( KiB/s): min= 4096, max= 4096, per=20.78%, avg=4096.00, stdev= 0.00, samples=1 00:09:49.317 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:49.317 lat (usec) : 250=95.16%, 500=0.56% 00:09:49.317 lat (msec) : 50=4.28% 00:09:49.317 cpu : usr=0.48%, sys=0.77%, ctx=539, majf=0, minf=1 00:09:49.317 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.317 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.317 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.317 job1: (groupid=0, jobs=1): err= 0: pid=330324: Wed Nov 20 12:19:32 2024 00:09:49.317 read: IOPS=1006, BW=4027KiB/s (4124kB/s)(4144KiB/1029msec) 00:09:49.317 slat (nsec): min=6882, max=44412, avg=8222.49, stdev=2138.71 00:09:49.317 clat (usec): min=173, max=41199, avg=696.73, stdev=4363.12 00:09:49.317 lat (usec): min=181, max=41222, avg=704.95, stdev=4364.50 00:09:49.317 clat percentiles (usec): 00:09:49.317 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:09:49.317 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 227], 00:09:49.317 | 70.00th=[ 241], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 277], 00:09:49.317 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:49.317 | 99.99th=[41157] 00:09:49.317 write: IOPS=1492, BW=5971KiB/s (6114kB/s)(6144KiB/1029msec); 0 zone resets 00:09:49.317 slat (nsec): min=9937, max=43764, avg=11793.12, stdev=3431.10 00:09:49.317 clat (usec): min=123, max=599, avg=177.45, stdev=38.39 00:09:49.317 lat (usec): min=134, max=610, avg=189.24, stdev=38.79 00:09:49.317 clat percentiles (usec): 00:09:49.317 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:09:49.317 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 174], 00:09:49.317 | 70.00th=[ 182], 80.00th=[ 196], 90.00th=[ 241], 95.00th=[ 253], 00:09:49.317 | 99.00th=[ 293], 99.50th=[ 314], 99.90th=[ 404], 99.95th=[ 603], 00:09:49.317 | 99.99th=[ 603] 00:09:49.317 bw ( KiB/s): min= 1344, max=10944, per=31.17%, avg=6144.00, stdev=6788.23, samples=2 00:09:49.317 iops : min= 336, max= 2736, avg=1536.00, stdev=1697.06, samples=2 00:09:49.317 lat (usec) : 250=87.64%, 
500=11.82%, 750=0.08% 00:09:49.317 lat (msec) : 50=0.47% 00:09:49.317 cpu : usr=2.63%, sys=3.02%, ctx=2572, majf=0, minf=2 00:09:49.317 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.317 issued rwts: total=1036,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.317 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.318 job2: (groupid=0, jobs=1): err= 0: pid=330325: Wed Nov 20 12:19:32 2024 00:09:49.318 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:09:49.318 slat (nsec): min=9443, max=27311, avg=23096.27, stdev=3225.46 00:09:49.318 clat (usec): min=40886, max=41116, avg=40974.67, stdev=53.67 00:09:49.318 lat (usec): min=40910, max=41126, avg=40997.76, stdev=52.15 00:09:49.318 clat percentiles (usec): 00:09:49.318 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:49.318 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:49.318 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:49.318 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:49.318 | 99.99th=[41157] 00:09:49.318 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:09:49.318 slat (nsec): min=9649, max=38464, avg=10498.87, stdev=1494.74 00:09:49.318 clat (usec): min=155, max=335, avg=191.10, stdev=17.68 00:09:49.318 lat (usec): min=165, max=373, avg=201.60, stdev=18.18 00:09:49.318 clat percentiles (usec): 00:09:49.318 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:09:49.318 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:09:49.318 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 223], 00:09:49.318 | 99.00th=[ 235], 99.50th=[ 255], 99.90th=[ 334], 99.95th=[ 334], 00:09:49.318 | 99.99th=[ 334] 00:09:49.318 bw 
( KiB/s): min= 4096, max= 4096, per=20.78%, avg=4096.00, stdev= 0.00, samples=1 00:09:49.318 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:49.318 lat (usec) : 250=95.32%, 500=0.56% 00:09:49.318 lat (msec) : 50=4.12% 00:09:49.318 cpu : usr=0.20%, sys=0.50%, ctx=535, majf=0, minf=1 00:09:49.318 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.318 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.318 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.318 job3: (groupid=0, jobs=1): err= 0: pid=330326: Wed Nov 20 12:19:32 2024 00:09:49.318 read: IOPS=2303, BW=9215KiB/s (9436kB/s)(9224KiB/1001msec) 00:09:49.318 slat (nsec): min=7270, max=39461, avg=8444.25, stdev=1336.59 00:09:49.318 clat (usec): min=176, max=406, avg=231.21, stdev=20.88 00:09:49.318 lat (usec): min=185, max=415, avg=239.65, stdev=20.88 00:09:49.318 clat percentiles (usec): 00:09:49.318 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 212], 00:09:49.318 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 239], 00:09:49.318 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 262], 00:09:49.318 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 302], 99.95th=[ 408], 00:09:49.318 | 99.99th=[ 408] 00:09:49.318 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:49.318 slat (nsec): min=10002, max=41406, avg=11136.74, stdev=1659.98 00:09:49.318 clat (usec): min=114, max=304, avg=158.12, stdev=17.98 00:09:49.318 lat (usec): min=125, max=315, avg=169.26, stdev=18.20 00:09:49.318 clat percentiles (usec): 00:09:49.318 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:09:49.318 | 30.00th=[ 147], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 163], 00:09:49.318 | 70.00th=[ 167], 80.00th=[ 174], 
90.00th=[ 182], 95.00th=[ 188], 00:09:49.318 | 99.00th=[ 202], 99.50th=[ 210], 99.90th=[ 273], 99.95th=[ 297], 00:09:49.318 | 99.99th=[ 306] 00:09:49.318 bw ( KiB/s): min=10792, max=10792, per=54.75%, avg=10792.00, stdev= 0.00, samples=1 00:09:49.318 iops : min= 2698, max= 2698, avg=2698.00, stdev= 0.00, samples=1 00:09:49.318 lat (usec) : 250=91.12%, 500=8.88% 00:09:49.318 cpu : usr=4.70%, sys=7.00%, ctx=4866, majf=0, minf=2 00:09:49.318 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.318 issued rwts: total=2306,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.318 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.318 00:09:49.318 Run status group 0 (all jobs): 00:09:49.318 READ: bw=12.7MiB/s (13.4MB/s), 87.4KiB/s-9215KiB/s (89.5kB/s-9436kB/s), io=13.2MiB (13.9MB), run=1001-1039msec 00:09:49.318 WRITE: bw=19.2MiB/s (20.2MB/s), 1971KiB/s-9.99MiB/s (2018kB/s-10.5MB/s), io=20.0MiB (21.0MB), run=1001-1039msec 00:09:49.318 00:09:49.318 Disk stats (read/write): 00:09:49.318 nvme0n1: ios=49/512, merge=0/0, ticks=1334/82, in_queue=1416, util=98.60% 00:09:49.318 nvme0n2: ios=1080/1536, merge=0/0, ticks=522/258, in_queue=780, util=88.12% 00:09:49.318 nvme0n3: ios=45/512, merge=0/0, ticks=1683/98, in_queue=1781, util=94.18% 00:09:49.318 nvme0n4: ios=2086/2048, merge=0/0, ticks=615/312, in_queue=927, util=99.69% 00:09:49.318 12:19:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:49.318 [global] 00:09:49.318 thread=1 00:09:49.318 invalidate=1 00:09:49.318 rw=write 00:09:49.318 time_based=1 00:09:49.318 runtime=1 00:09:49.318 ioengine=libaio 00:09:49.318 direct=1 00:09:49.318 bs=4096 00:09:49.318 iodepth=128 00:09:49.318 
norandommap=0 00:09:49.318 numjobs=1 00:09:49.318 00:09:49.318 verify_dump=1 00:09:49.318 verify_backlog=512 00:09:49.318 verify_state_save=0 00:09:49.318 do_verify=1 00:09:49.318 verify=crc32c-intel 00:09:49.318 [job0] 00:09:49.318 filename=/dev/nvme0n1 00:09:49.318 [job1] 00:09:49.318 filename=/dev/nvme0n2 00:09:49.318 [job2] 00:09:49.318 filename=/dev/nvme0n3 00:09:49.318 [job3] 00:09:49.318 filename=/dev/nvme0n4 00:09:49.318 Could not set queue depth (nvme0n1) 00:09:49.318 Could not set queue depth (nvme0n2) 00:09:49.318 Could not set queue depth (nvme0n3) 00:09:49.318 Could not set queue depth (nvme0n4) 00:09:49.577 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:49.577 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:49.577 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:49.577 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:49.577 fio-3.35 00:09:49.577 Starting 4 threads 00:09:50.954 00:09:50.954 job0: (groupid=0, jobs=1): err= 0: pid=330692: Wed Nov 20 12:19:33 2024 00:09:50.954 read: IOPS=4226, BW=16.5MiB/s (17.3MB/s)(17.2MiB/1044msec) 00:09:50.954 slat (nsec): min=1103, max=17736k, avg=101009.58, stdev=751028.95 00:09:50.954 clat (usec): min=4382, max=58107, avg=15281.40, stdev=9094.43 00:09:50.954 lat (usec): min=4389, max=63005, avg=15382.41, stdev=9128.26 00:09:50.954 clat percentiles (usec): 00:09:50.954 | 1.00th=[ 5014], 5.00th=[ 8291], 10.00th=[ 9241], 20.00th=[10028], 00:09:50.954 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11863], 60.00th=[12387], 00:09:50.954 | 70.00th=[16712], 80.00th=[20055], 90.00th=[23987], 95.00th=[26870], 00:09:50.954 | 99.00th=[57410], 99.50th=[57934], 99.90th=[57934], 99.95th=[57934], 00:09:50.954 | 99.99th=[57934] 00:09:50.954 write: IOPS=4413, BW=17.2MiB/s 
(18.1MB/s)(18.0MiB/1044msec); 0 zone resets 00:09:50.954 slat (nsec): min=1911, max=33308k, avg=109719.53, stdev=883093.53 00:09:50.954 clat (usec): min=543, max=45319, avg=12784.86, stdev=6001.07 00:09:50.954 lat (usec): min=551, max=46662, avg=12894.58, stdev=6080.90 00:09:50.954 clat percentiles (usec): 00:09:50.954 | 1.00th=[ 3916], 5.00th=[ 5735], 10.00th=[ 7046], 20.00th=[ 8586], 00:09:50.954 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11863], 60.00th=[12125], 00:09:50.954 | 70.00th=[12387], 80.00th=[17171], 90.00th=[19792], 95.00th=[22676], 00:09:50.954 | 99.00th=[33817], 99.50th=[39060], 99.90th=[44827], 99.95th=[45351], 00:09:50.954 | 99.99th=[45351] 00:09:50.954 bw ( KiB/s): min=16384, max=20480, per=26.82%, avg=18432.00, stdev=2896.31, samples=2 00:09:50.954 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:09:50.954 lat (usec) : 750=0.03% 00:09:50.954 lat (msec) : 2=0.09%, 4=0.40%, 10=21.88%, 20=62.83%, 50=14.07% 00:09:50.954 lat (msec) : 100=0.70% 00:09:50.954 cpu : usr=2.88%, sys=4.12%, ctx=403, majf=0, minf=1 00:09:50.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:50.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.954 issued rwts: total=4412,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.954 job1: (groupid=0, jobs=1): err= 0: pid=330693: Wed Nov 20 12:19:33 2024 00:09:50.954 read: IOPS=4162, BW=16.3MiB/s (17.0MB/s)(16.3MiB/1003msec) 00:09:50.954 slat (nsec): min=1013, max=9894.4k, avg=104014.45, stdev=648904.04 00:09:50.954 clat (usec): min=1006, max=32874, avg=12722.65, stdev=4800.14 00:09:50.954 lat (usec): min=1518, max=32883, avg=12826.67, stdev=4854.12 00:09:50.954 clat percentiles (usec): 00:09:50.954 | 1.00th=[ 3720], 5.00th=[ 5866], 10.00th=[ 6128], 20.00th=[ 8356], 00:09:50.954 | 
30.00th=[10814], 40.00th=[11994], 50.00th=[12387], 60.00th=[13042], 00:09:50.954 | 70.00th=[15008], 80.00th=[16450], 90.00th=[18744], 95.00th=[20579], 00:09:50.954 | 99.00th=[25560], 99.50th=[28181], 99.90th=[32900], 99.95th=[32900], 00:09:50.954 | 99.99th=[32900] 00:09:50.954 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:09:50.954 slat (nsec): min=1778, max=11359k, avg=116754.92, stdev=658972.89 00:09:50.954 clat (usec): min=1517, max=57170, avg=16088.11, stdev=10444.35 00:09:50.954 lat (usec): min=1532, max=57182, avg=16204.86, stdev=10526.19 00:09:50.954 clat percentiles (usec): 00:09:50.954 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5932], 20.00th=[10683], 00:09:50.954 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12387], 60.00th=[13698], 00:09:50.954 | 70.00th=[15664], 80.00th=[19530], 90.00th=[30278], 95.00th=[43254], 00:09:50.954 | 99.00th=[50594], 99.50th=[52691], 99.90th=[56886], 99.95th=[57410], 00:09:50.954 | 99.99th=[57410] 00:09:50.954 bw ( KiB/s): min=11904, max=24576, per=26.54%, avg=18240.00, stdev=8960.46, samples=2 00:09:50.954 iops : min= 2976, max= 6144, avg=4560.00, stdev=2240.11, samples=2 00:09:50.954 lat (msec) : 2=0.17%, 4=1.32%, 10=18.71%, 20=68.48%, 50=10.66% 00:09:50.954 lat (msec) : 100=0.66% 00:09:50.954 cpu : usr=3.39%, sys=5.09%, ctx=477, majf=0, minf=2 00:09:50.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:50.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.954 issued rwts: total=4175,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.954 job2: (groupid=0, jobs=1): err= 0: pid=330694: Wed Nov 20 12:19:33 2024 00:09:50.954 read: IOPS=3586, BW=14.0MiB/s (14.7MB/s)(14.6MiB/1044msec) 00:09:50.954 slat (nsec): min=1170, max=14336k, avg=106763.63, stdev=709424.62 00:09:50.954 clat 
(usec): min=2376, max=54108, avg=14998.52, stdev=7321.74 00:09:50.954 lat (usec): min=2382, max=54110, avg=15105.28, stdev=7338.08 00:09:50.954 clat percentiles (usec): 00:09:50.954 | 1.00th=[ 4752], 5.00th=[ 8586], 10.00th=[10028], 20.00th=[11600], 00:09:50.954 | 30.00th=[11994], 40.00th=[13042], 50.00th=[13960], 60.00th=[14746], 00:09:50.954 | 70.00th=[15533], 80.00th=[16581], 90.00th=[17957], 95.00th=[23462], 00:09:50.954 | 99.00th=[51119], 99.50th=[54264], 99.90th=[54264], 99.95th=[54264], 00:09:50.954 | 99.99th=[54264] 00:09:50.955 write: IOPS=3923, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1044msec); 0 zone resets 00:09:50.955 slat (nsec): min=1960, max=42154k, avg=140351.36, stdev=990073.67 00:09:50.955 clat (usec): min=5406, max=53310, avg=16312.61, stdev=7442.23 00:09:50.955 lat (usec): min=5414, max=81518, avg=16452.97, stdev=7565.19 00:09:50.955 clat percentiles (usec): 00:09:50.955 | 1.00th=[ 7963], 5.00th=[11207], 10.00th=[11600], 20.00th=[11863], 00:09:50.955 | 30.00th=[12125], 40.00th=[13042], 50.00th=[13960], 60.00th=[15008], 00:09:50.955 | 70.00th=[15795], 80.00th=[19268], 90.00th=[23725], 95.00th=[30540], 00:09:50.955 | 99.00th=[49021], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:09:50.955 | 99.99th=[53216] 00:09:50.955 bw ( KiB/s): min=14952, max=17816, per=23.84%, avg=16384.00, stdev=2025.15, samples=2 00:09:50.955 iops : min= 3738, max= 4454, avg=4096.00, stdev=506.29, samples=2 00:09:50.955 lat (msec) : 4=0.28%, 10=5.97%, 20=80.52%, 50=12.14%, 100=1.08% 00:09:50.955 cpu : usr=2.49%, sys=4.31%, ctx=453, majf=0, minf=1 00:09:50.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:50.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.955 issued rwts: total=3744,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.955 job3: 
(groupid=0, jobs=1): err= 0: pid=330695: Wed Nov 20 12:19:33 2024 00:09:50.955 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:09:50.955 slat (nsec): min=1435, max=12586k, avg=104542.79, stdev=779339.07 00:09:50.955 clat (usec): min=3596, max=25791, avg=12966.66, stdev=3112.84 00:09:50.955 lat (usec): min=3606, max=27285, avg=13071.21, stdev=3184.31 00:09:50.955 clat percentiles (usec): 00:09:50.955 | 1.00th=[ 5276], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10945], 00:09:50.955 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12387], 60.00th=[13435], 00:09:50.955 | 70.00th=[13698], 80.00th=[14222], 90.00th=[17433], 95.00th=[19530], 00:09:50.955 | 99.00th=[22414], 99.50th=[23462], 99.90th=[25035], 99.95th=[25035], 00:09:50.955 | 99.99th=[25822] 00:09:50.955 write: IOPS=4596, BW=18.0MiB/s (18.8MB/s)(18.1MiB/1006msec); 0 zone resets 00:09:50.955 slat (usec): min=2, max=46844, avg=103.94, stdev=1131.08 00:09:50.955 clat (usec): min=1563, max=118318, avg=11634.49, stdev=4908.47 00:09:50.955 lat (usec): min=1576, max=118362, avg=11738.43, stdev=5171.82 00:09:50.955 clat percentiles (msec): 00:09:50.955 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 11], 00:09:50.955 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:09:50.955 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 15], 00:09:50.955 | 99.00th=[ 20], 99.50th=[ 24], 99.90th=[ 93], 99.95th=[ 118], 00:09:50.955 | 99.99th=[ 118] 00:09:50.955 bw ( KiB/s): min=16384, max=20480, per=26.82%, avg=18432.00, stdev=2896.31, samples=2 00:09:50.955 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:09:50.955 lat (msec) : 2=0.06%, 4=0.82%, 10=11.88%, 20=84.59%, 50=2.56% 00:09:50.955 lat (msec) : 100=0.04%, 250=0.04% 00:09:50.955 cpu : usr=3.28%, sys=5.97%, ctx=480, majf=0, minf=1 00:09:50.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:50.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.955 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.955 issued rwts: total=4608,4624,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.955 00:09:50.955 Run status group 0 (all jobs): 00:09:50.955 READ: bw=63.4MiB/s (66.5MB/s), 14.0MiB/s-17.9MiB/s (14.7MB/s-18.8MB/s), io=66.2MiB (69.4MB), run=1003-1044msec 00:09:50.955 WRITE: bw=67.1MiB/s (70.4MB/s), 15.3MiB/s-18.0MiB/s (16.1MB/s-18.8MB/s), io=70.1MiB (73.5MB), run=1003-1044msec 00:09:50.955 00:09:50.955 Disk stats (read/write): 00:09:50.955 nvme0n1: ios=4024/4096, merge=0/0, ticks=35493/30126, in_queue=65619, util=90.38% 00:09:50.955 nvme0n2: ios=3759/4096, merge=0/0, ticks=22023/32364, in_queue=54387, util=91.37% 00:09:50.955 nvme0n3: ios=3132/3249, merge=0/0, ticks=29014/35627, in_queue=64641, util=98.96% 00:09:50.955 nvme0n4: ios=3619/3975, merge=0/0, ticks=46467/44683, in_queue=91150, util=98.12% 00:09:50.955 12:19:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:50.955 [global] 00:09:50.955 thread=1 00:09:50.955 invalidate=1 00:09:50.955 rw=randwrite 00:09:50.955 time_based=1 00:09:50.955 runtime=1 00:09:50.955 ioengine=libaio 00:09:50.955 direct=1 00:09:50.955 bs=4096 00:09:50.955 iodepth=128 00:09:50.955 norandommap=0 00:09:50.955 numjobs=1 00:09:50.955 00:09:50.955 verify_dump=1 00:09:50.955 verify_backlog=512 00:09:50.955 verify_state_save=0 00:09:50.955 do_verify=1 00:09:50.955 verify=crc32c-intel 00:09:50.955 [job0] 00:09:50.955 filename=/dev/nvme0n1 00:09:50.955 [job1] 00:09:50.955 filename=/dev/nvme0n2 00:09:50.955 [job2] 00:09:50.955 filename=/dev/nvme0n3 00:09:50.955 [job3] 00:09:50.955 filename=/dev/nvme0n4 00:09:50.955 Could not set queue depth (nvme0n1) 00:09:50.955 Could not set queue depth (nvme0n2) 00:09:50.955 Could not set queue depth (nvme0n3) 00:09:50.955 Could not 
set queue depth (nvme0n4) 00:09:51.214 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.214 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.214 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.214 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.214 fio-3.35 00:09:51.214 Starting 4 threads 00:09:52.590 00:09:52.591 job0: (groupid=0, jobs=1): err= 0: pid=331072: Wed Nov 20 12:19:35 2024 00:09:52.591 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:09:52.591 slat (nsec): min=1120, max=29496k, avg=159964.76, stdev=1159401.79 00:09:52.591 clat (usec): min=4921, max=74599, avg=21433.48, stdev=12357.55 00:09:52.591 lat (usec): min=4926, max=74622, avg=21593.44, stdev=12447.83 00:09:52.591 clat percentiles (usec): 00:09:52.591 | 1.00th=[ 7439], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11076], 00:09:52.591 | 30.00th=[11994], 40.00th=[14484], 50.00th=[15926], 60.00th=[18220], 00:09:52.591 | 70.00th=[29754], 80.00th=[33817], 90.00th=[37487], 95.00th=[42730], 00:09:52.591 | 99.00th=[67634], 99.50th=[67634], 99.90th=[67634], 99.95th=[68682], 00:09:52.591 | 99.99th=[74974] 00:09:52.591 write: IOPS=3590, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1005msec); 0 zone resets 00:09:52.591 slat (nsec): min=1866, max=10586k, avg=112705.02, stdev=659916.40 00:09:52.591 clat (usec): min=1769, max=40957, avg=13818.04, stdev=4770.77 00:09:52.591 lat (usec): min=3921, max=40964, avg=13930.75, stdev=4822.60 00:09:52.591 clat percentiles (usec): 00:09:52.591 | 1.00th=[ 5407], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10421], 00:09:52.591 | 30.00th=[10945], 40.00th=[11731], 50.00th=[13173], 60.00th=[13566], 00:09:52.591 | 70.00th=[14091], 80.00th=[16057], 90.00th=[20841], 95.00th=[22152], 00:09:52.591 | 
99.00th=[28181], 99.50th=[30802], 99.90th=[40633], 99.95th=[41157], 00:09:52.591 | 99.99th=[41157] 00:09:52.591 bw ( KiB/s): min=10768, max=17904, per=21.07%, avg=14336.00, stdev=5045.91, samples=2 00:09:52.591 iops : min= 2692, max= 4476, avg=3584.00, stdev=1261.48, samples=2 00:09:52.591 lat (msec) : 2=0.01%, 4=0.18%, 10=8.08%, 20=67.52%, 50=22.78% 00:09:52.591 lat (msec) : 100=1.43% 00:09:52.591 cpu : usr=1.89%, sys=4.28%, ctx=317, majf=0, minf=1 00:09:52.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:52.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.591 issued rwts: total=3584,3608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.591 job1: (groupid=0, jobs=1): err= 0: pid=331073: Wed Nov 20 12:19:35 2024 00:09:52.591 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:09:52.591 slat (nsec): min=1409, max=15998k, avg=97496.18, stdev=747485.45 00:09:52.591 clat (usec): min=3674, max=35088, avg=12491.51, stdev=3380.24 00:09:52.591 lat (usec): min=3680, max=35090, avg=12589.01, stdev=3449.87 00:09:52.591 clat percentiles (usec): 00:09:52.591 | 1.00th=[ 8029], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10290], 00:09:52.591 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:09:52.591 | 70.00th=[13173], 80.00th=[14222], 90.00th=[16909], 95.00th=[17695], 00:09:52.591 | 99.00th=[28443], 99.50th=[30016], 99.90th=[34866], 99.95th=[34866], 00:09:52.591 | 99.99th=[34866] 00:09:52.591 write: IOPS=4894, BW=19.1MiB/s (20.0MB/s)(19.3MiB/1008msec); 0 zone resets 00:09:52.591 slat (usec): min=2, max=20475, avg=96.84, stdev=671.04 00:09:52.591 clat (usec): min=400, max=53208, avg=14160.74, stdev=9324.47 00:09:52.591 lat (usec): min=413, max=53217, avg=14257.57, stdev=9390.09 00:09:52.591 clat percentiles (usec): 00:09:52.591 | 
1.00th=[ 963], 5.00th=[ 4293], 10.00th=[ 6521], 20.00th=[ 8717], 00:09:52.591 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[10945], 60.00th=[11469], 00:09:52.591 | 70.00th=[12649], 80.00th=[18744], 90.00th=[30278], 95.00th=[35914], 00:09:52.591 | 99.00th=[43254], 99.50th=[47449], 99.90th=[53216], 99.95th=[53216], 00:09:52.591 | 99.99th=[53216] 00:09:52.591 bw ( KiB/s): min=16624, max=21832, per=28.26%, avg=19228.00, stdev=3682.61, samples=2 00:09:52.591 iops : min= 4156, max= 5458, avg=4807.00, stdev=920.65, samples=2 00:09:52.591 lat (usec) : 500=0.03%, 750=0.04%, 1000=0.56% 00:09:52.591 lat (msec) : 2=0.35%, 4=1.33%, 10=20.36%, 20=66.59%, 50=10.49% 00:09:52.591 lat (msec) : 100=0.25% 00:09:52.591 cpu : usr=3.48%, sys=6.55%, ctx=407, majf=0, minf=1 00:09:52.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:52.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.591 issued rwts: total=4608,4934,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.591 job2: (groupid=0, jobs=1): err= 0: pid=331074: Wed Nov 20 12:19:35 2024 00:09:52.591 read: IOPS=4104, BW=16.0MiB/s (16.8MB/s)(16.7MiB/1044msec) 00:09:52.591 slat (nsec): min=1150, max=8807.0k, avg=99092.90, stdev=630946.13 00:09:52.591 clat (usec): min=6976, max=52437, avg=13919.25, stdev=7197.36 00:09:52.591 lat (usec): min=7564, max=57701, avg=14018.34, stdev=7220.50 00:09:52.591 clat percentiles (usec): 00:09:52.591 | 1.00th=[ 8291], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[11469], 00:09:52.591 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:09:52.591 | 70.00th=[12387], 80.00th=[13829], 90.00th=[18220], 95.00th=[26346], 00:09:52.591 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:09:52.591 | 99.99th=[52691] 00:09:52.591 write: IOPS=4413, BW=17.2MiB/s 
(18.1MB/s)(18.0MiB/1044msec); 0 zone resets 00:09:52.591 slat (nsec): min=1929, max=25136k, avg=119287.21, stdev=941476.49 00:09:52.591 clat (usec): min=6614, max=66840, avg=15333.84, stdev=9416.73 00:09:52.591 lat (usec): min=6620, max=66872, avg=15453.13, stdev=9499.59 00:09:52.591 clat percentiles (usec): 00:09:52.591 | 1.00th=[ 7635], 5.00th=[10290], 10.00th=[10945], 20.00th=[11338], 00:09:52.591 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12125], 00:09:52.591 | 70.00th=[12518], 80.00th=[15401], 90.00th=[26084], 95.00th=[42206], 00:09:52.591 | 99.00th=[52691], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:09:52.591 | 99.99th=[66847] 00:09:52.591 bw ( KiB/s): min=16384, max=20480, per=27.09%, avg=18432.00, stdev=2896.31, samples=2 00:09:52.591 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:09:52.591 lat (msec) : 10=6.87%, 20=84.20%, 50=6.34%, 100=2.59% 00:09:52.591 cpu : usr=2.68%, sys=4.89%, ctx=414, majf=0, minf=1 00:09:52.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:52.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.591 issued rwts: total=4285,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.591 job3: (groupid=0, jobs=1): err= 0: pid=331075: Wed Nov 20 12:19:35 2024 00:09:52.591 read: IOPS=4139, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1005msec) 00:09:52.591 slat (nsec): min=1106, max=11364k, avg=120046.68, stdev=763326.04 00:09:52.591 clat (usec): min=1975, max=38451, avg=14771.26, stdev=4552.79 00:09:52.591 lat (usec): min=4214, max=38457, avg=14891.30, stdev=4621.29 00:09:52.591 clat percentiles (usec): 00:09:52.591 | 1.00th=[ 5276], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11863], 00:09:52.591 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13566], 60.00th=[14746], 00:09:52.591 | 
70.00th=[15795], 80.00th=[16909], 90.00th=[20579], 95.00th=[23987], 00:09:52.591 | 99.00th=[32900], 99.50th=[35914], 99.90th=[38536], 99.95th=[38536], 00:09:52.591 | 99.99th=[38536] 00:09:52.591 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:09:52.591 slat (usec): min=2, max=10371, avg=98.40, stdev=618.72 00:09:52.591 clat (usec): min=2651, max=46170, avg=14281.99, stdev=5745.71 00:09:52.591 lat (usec): min=2665, max=46899, avg=14380.39, stdev=5793.63 00:09:52.591 clat percentiles (usec): 00:09:52.591 | 1.00th=[ 5080], 5.00th=[ 8356], 10.00th=[ 9503], 20.00th=[11338], 00:09:52.591 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12911], 60.00th=[13698], 00:09:52.591 | 70.00th=[14877], 80.00th=[16581], 90.00th=[18744], 95.00th=[25560], 00:09:52.591 | 99.00th=[38011], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:09:52.591 | 99.99th=[46400] 00:09:52.591 bw ( KiB/s): min=15872, max=20480, per=26.71%, avg=18176.00, stdev=3258.35, samples=2 00:09:52.591 iops : min= 3968, max= 5120, avg=4544.00, stdev=814.59, samples=2 00:09:52.591 lat (msec) : 2=0.01%, 4=0.25%, 10=8.69%, 20=81.56%, 50=9.49% 00:09:52.591 cpu : usr=3.88%, sys=5.18%, ctx=414, majf=0, minf=2 00:09:52.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:52.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.591 issued rwts: total=4160,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.591 00:09:52.591 Run status group 0 (all jobs): 00:09:52.591 READ: bw=62.2MiB/s (65.3MB/s), 13.9MiB/s-17.9MiB/s (14.6MB/s-18.7MB/s), io=65.0MiB (68.1MB), run=1005-1044msec 00:09:52.591 WRITE: bw=66.4MiB/s (69.7MB/s), 14.0MiB/s-19.1MiB/s (14.7MB/s-20.0MB/s), io=69.4MiB (72.7MB), run=1005-1044msec 00:09:52.591 00:09:52.591 Disk stats (read/write): 00:09:52.591 nvme0n1: 
ios=3105/3359, merge=0/0, ticks=22715/15380, in_queue=38095, util=96.09% 00:09:52.591 nvme0n2: ios=3604/4095, merge=0/0, ticks=44952/57515, in_queue=102467, util=95.03% 00:09:52.591 nvme0n3: ios=3644/3967, merge=0/0, ticks=22662/27502, in_queue=50164, util=99.79% 00:09:52.591 nvme0n4: ios=3605/3855, merge=0/0, ticks=32826/32722, in_queue=65548, util=96.43% 00:09:52.591 12:19:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:52.591 12:19:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=331301 00:09:52.591 12:19:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:52.591 12:19:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:52.591 [global] 00:09:52.591 thread=1 00:09:52.591 invalidate=1 00:09:52.591 rw=read 00:09:52.591 time_based=1 00:09:52.591 runtime=10 00:09:52.591 ioengine=libaio 00:09:52.591 direct=1 00:09:52.591 bs=4096 00:09:52.591 iodepth=1 00:09:52.591 norandommap=1 00:09:52.591 numjobs=1 00:09:52.591 00:09:52.591 [job0] 00:09:52.591 filename=/dev/nvme0n1 00:09:52.591 [job1] 00:09:52.591 filename=/dev/nvme0n2 00:09:52.591 [job2] 00:09:52.591 filename=/dev/nvme0n3 00:09:52.591 [job3] 00:09:52.591 filename=/dev/nvme0n4 00:09:52.592 Could not set queue depth (nvme0n1) 00:09:52.592 Could not set queue depth (nvme0n2) 00:09:52.592 Could not set queue depth (nvme0n3) 00:09:52.592 Could not set queue depth (nvme0n4) 00:09:52.851 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.851 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.851 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.851 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:09:52.851 fio-3.35 00:09:52.851 Starting 4 threads 00:09:55.382 12:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:55.640 12:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:55.640 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=421888, buflen=4096 00:09:55.640 fio: pid=331450, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:55.899 12:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:55.899 12:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:55.899 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=299008, buflen=4096 00:09:55.899 fio: pid=331449, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:56.158 12:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:56.158 12:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:56.158 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=315392, buflen=4096 00:09:56.158 fio: pid=331447, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:56.418 12:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:56.418 12:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:56.418 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=339968, buflen=4096 00:09:56.418 fio: pid=331448, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:56.418 00:09:56.418 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=331447: Wed Nov 20 12:19:39 2024 00:09:56.418 read: IOPS=24, BW=98.0KiB/s (100kB/s)(308KiB/3143msec) 00:09:56.418 slat (usec): min=14, max=2751, avg=58.07, stdev=308.97 00:09:56.418 clat (usec): min=560, max=42067, avg=40470.67, stdev=4611.50 00:09:56.418 lat (usec): min=616, max=43996, avg=40529.22, stdev=4624.13 00:09:56.418 clat percentiles (usec): 00:09:56.418 | 1.00th=[ 562], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:56.418 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:56.418 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:56.418 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:56.418 | 99.99th=[42206] 00:09:56.418 bw ( KiB/s): min= 96, max= 104, per=24.20%, avg=97.83, stdev= 3.25, samples=6 00:09:56.418 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:09:56.418 lat (usec) : 750=1.28% 00:09:56.418 lat (msec) : 50=97.44% 00:09:56.418 cpu : usr=0.13%, sys=0.00%, ctx=81, majf=0, minf=1 00:09:56.418 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.418 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.418 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.418 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.418 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=331448: Wed Nov 20 12:19:39 
2024 00:09:56.418 read: IOPS=25, BW=99.0KiB/s (101kB/s)(332KiB/3353msec) 00:09:56.418 slat (usec): min=10, max=5825, avg=92.17, stdev=633.16 00:09:56.418 clat (usec): min=484, max=42048, avg=40050.38, stdev=6240.00 00:09:56.418 lat (usec): min=562, max=47142, avg=40143.39, stdev=6282.13 00:09:56.418 clat percentiles (usec): 00:09:56.418 | 1.00th=[ 486], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:56.418 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:56.418 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:56.418 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:56.418 | 99.99th=[42206] 00:09:56.418 bw ( KiB/s): min= 93, max= 112, per=24.70%, avg=99.50, stdev= 7.15, samples=6 00:09:56.418 iops : min= 23, max= 28, avg=24.83, stdev= 1.83, samples=6 00:09:56.418 lat (usec) : 500=1.19%, 750=1.19% 00:09:56.418 lat (msec) : 50=96.43% 00:09:56.418 cpu : usr=0.12%, sys=0.00%, ctx=88, majf=0, minf=2 00:09:56.418 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.418 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.418 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.418 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.418 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=331449: Wed Nov 20 12:19:39 2024 00:09:56.418 read: IOPS=25, BW=99.5KiB/s (102kB/s)(292KiB/2936msec) 00:09:56.418 slat (nsec): min=9418, max=77194, avg=24086.68, stdev=6860.95 00:09:56.418 clat (usec): min=280, max=41979, avg=39890.23, stdev=6655.30 00:09:56.418 lat (usec): min=303, max=42004, avg=39914.32, stdev=6654.27 00:09:56.418 clat percentiles (usec): 00:09:56.418 | 1.00th=[ 281], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:56.418 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:56.418 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:56.418 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:56.418 | 99.99th=[42206] 00:09:56.418 bw ( KiB/s): min= 96, max= 104, per=24.95%, avg=100.80, stdev= 4.38, samples=5 00:09:56.418 iops : min= 24, max= 26, avg=25.20, stdev= 1.10, samples=5 00:09:56.418 lat (usec) : 500=1.35%, 1000=1.35% 00:09:56.419 lat (msec) : 50=95.95% 00:09:56.419 cpu : usr=0.00%, sys=0.10%, ctx=77, majf=0, minf=2 00:09:56.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.419 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.419 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.419 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=331450: Wed Nov 20 12:19:39 2024 00:09:56.419 read: IOPS=37, BW=150KiB/s (154kB/s)(412KiB/2741msec) 00:09:56.419 slat (nsec): min=7329, max=71954, avg=18686.38, stdev=8535.06 00:09:56.419 clat (usec): min=217, max=42483, avg=26379.02, stdev=19636.51 00:09:56.419 lat (usec): min=233, max=42490, avg=26397.67, stdev=19635.64 00:09:56.419 clat percentiles (usec): 00:09:56.419 | 1.00th=[ 219], 5.00th=[ 231], 10.00th=[ 245], 20.00th=[ 265], 00:09:56.419 | 30.00th=[ 322], 40.00th=[40633], 50.00th=[40633], 60.00th=[41157], 00:09:56.419 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:56.419 | 99.00th=[41681], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:56.419 | 99.99th=[42730] 00:09:56.419 bw ( KiB/s): min= 104, max= 208, per=36.17%, avg=145.60, stdev=42.10, samples=5 00:09:56.419 iops : min= 26, max= 52, avg=36.40, stdev=10.53, samples=5 00:09:56.419 lat (usec) : 250=16.35%, 
500=18.27%, 1000=0.96% 00:09:56.419 lat (msec) : 50=63.46% 00:09:56.419 cpu : usr=0.00%, sys=0.11%, ctx=105, majf=0, minf=2 00:09:56.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.419 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.419 issued rwts: total=104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.419 00:09:56.419 Run status group 0 (all jobs): 00:09:56.419 READ: bw=401KiB/s (410kB/s), 98.0KiB/s-150KiB/s (100kB/s-154kB/s), io=1344KiB (1376kB), run=2741-3353msec 00:09:56.419 00:09:56.419 Disk stats (read/write): 00:09:56.419 nvme0n1: ios=76/0, merge=0/0, ticks=3078/0, in_queue=3078, util=95.69% 00:09:56.419 nvme0n2: ios=77/0, merge=0/0, ticks=3079/0, in_queue=3079, util=95.95% 00:09:56.419 nvme0n3: ios=114/0, merge=0/0, ticks=3918/0, in_queue=3918, util=99.02% 00:09:56.419 nvme0n4: ios=96/0, merge=0/0, ticks=2592/0, in_queue=2592, util=96.45% 00:09:56.419 12:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:56.419 12:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:56.678 12:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:56.678 12:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:56.936 12:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:56.936 12:19:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:57.195 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:57.195 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:57.454 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:57.454 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 331301 00:09:57.454 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:57.454 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:57.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.454 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:57.454 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:57.454 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:57.454 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:57.454 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:57.454 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:57.454 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:57.454 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 
']' 00:09:57.454 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:57.454 nvmf hotplug test: fio failed as expected 00:09:57.454 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:57.713 rmmod nvme_tcp 00:09:57.713 rmmod nvme_fabrics 00:09:57.713 rmmod nvme_keyring 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@129 -- # return 0 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 328536 ']' 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 328536 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 328536 ']' 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 328536 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 328536 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 328536' 00:09:57.713 killing process with pid 328536 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 328536 00:09:57.713 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 328536 00:09:57.972 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:57.972 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:57.972 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:57.972 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:57.972 12:19:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:57.972 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:57.972 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:57.972 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:57.972 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:57.972 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.972 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.972 12:19:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.507 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:00.507 00:10:00.507 real 0m26.856s 00:10:00.507 user 1m46.748s 00:10:00.507 sys 0m7.926s 00:10:00.507 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.507 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.507 ************************************ 00:10:00.507 END TEST nvmf_fio_target 00:10:00.507 ************************************ 00:10:00.507 12:19:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:00.507 12:19:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:00.507 12:19:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.507 12:19:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:10:00.507 ************************************ 00:10:00.507 START TEST nvmf_bdevio 00:10:00.507 ************************************ 00:10:00.507 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:00.508 * Looking for test storage... 00:10:00.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.508 12:19:43 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:00.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.508 --rc genhtml_branch_coverage=1 00:10:00.508 --rc genhtml_function_coverage=1 00:10:00.508 --rc genhtml_legend=1 00:10:00.508 --rc geninfo_all_blocks=1 00:10:00.508 --rc geninfo_unexecuted_blocks=1 00:10:00.508 00:10:00.508 ' 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:00.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.508 --rc genhtml_branch_coverage=1 00:10:00.508 --rc genhtml_function_coverage=1 00:10:00.508 --rc genhtml_legend=1 00:10:00.508 --rc geninfo_all_blocks=1 00:10:00.508 --rc geninfo_unexecuted_blocks=1 00:10:00.508 00:10:00.508 ' 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:00.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.508 --rc genhtml_branch_coverage=1 00:10:00.508 --rc genhtml_function_coverage=1 00:10:00.508 --rc genhtml_legend=1 00:10:00.508 --rc geninfo_all_blocks=1 00:10:00.508 --rc geninfo_unexecuted_blocks=1 00:10:00.508 00:10:00.508 ' 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:00.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.508 --rc genhtml_branch_coverage=1 00:10:00.508 --rc genhtml_function_coverage=1 00:10:00.508 --rc genhtml_legend=1 00:10:00.508 --rc geninfo_all_blocks=1 00:10:00.508 --rc geninfo_unexecuted_blocks=1 00:10:00.508 00:10:00.508 ' 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.508 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.509 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.509 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.509 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.509 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.509 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.509 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:00.509 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:00.509 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:00.509 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.509 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:00.509 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:00.509 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:00.509 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.509 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.509 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.509 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:00.509 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
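The `[: : integer expression expected` message logged above comes from `test`/`[` receiving an empty string where `-eq` requires an integer: the script runs `'[' '' -eq 1 ']'` because the variable being tested is unset. The test command returns status 2, which the surrounding conditional treats as false, so the run continues despite the noise. A minimal reproduction, with a defensive form that avoids the error:

```shell
# Reproduce the log's error: -eq needs integer operands, and an unset
# variable expands to the empty string.
unset FLAG
if [ "$FLAG" -eq 1 ] 2>/dev/null; then echo enabled; else echo disabled; fi
# test(1) exits with status 2 ("integer expression expected"); the if
# treats that as false, so execution proceeds -- matching the log.

# Defensive form: default the expansion to 0 so the operand is always
# a valid integer and no error is emitted.
if [ "${FLAG:-0}" -eq 1 ]; then echo enabled; else echo disabled; fi
```

Both branches print `disabled` here; the only difference is that the second never writes the error to stderr.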
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:00.509 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:00.509 12:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.083 12:19:49 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:07.083 12:19:49 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:07.083 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:07.083 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:07.083 
12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:07.083 Found net devices under 0000:86:00.0: cvl_0_0 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:07.083 Found net devices under 0000:86:00.1: cvl_0_1 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:07.083 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:07.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:07.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:10:07.084 00:10:07.084 --- 10.0.0.2 ping statistics --- 00:10:07.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.084 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:07.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:10:07.084 00:10:07.084 --- 10.0.0.1 ping statistics --- 00:10:07.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.084 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:07.084 12:19:49 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=335915 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 335915 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 335915 ']' 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.084 [2024-11-20 12:19:49.362239] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:10:07.084 [2024-11-20 12:19:49.362294] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.084 [2024-11-20 12:19:49.443464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:07.084 [2024-11-20 12:19:49.485824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.084 [2024-11-20 12:19:49.485863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.084 [2024-11-20 12:19:49.485870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.084 [2024-11-20 12:19:49.485877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.084 [2024-11-20 12:19:49.485882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:07.084 [2024-11-20 12:19:49.487550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:07.084 [2024-11-20 12:19:49.487660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:07.084 [2024-11-20 12:19:49.487765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.084 [2024-11-20 12:19:49.487766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.084 [2024-11-20 12:19:49.624373] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.084 12:19:49 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.084 Malloc0 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.084 [2024-11-20 12:19:49.689283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:07.084 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:07.084 { 00:10:07.084 "params": { 00:10:07.084 "name": "Nvme$subsystem", 00:10:07.084 "trtype": "$TEST_TRANSPORT", 00:10:07.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:07.084 "adrfam": "ipv4", 00:10:07.084 "trsvcid": "$NVMF_PORT", 00:10:07.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:07.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:07.084 "hdgst": ${hdgst:-false}, 00:10:07.084 "ddgst": ${ddgst:-false} 00:10:07.085 }, 00:10:07.085 "method": "bdev_nvme_attach_controller" 00:10:07.085 } 00:10:07.085 EOF 00:10:07.085 )") 00:10:07.085 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:07.085 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:07.085 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:07.085 12:19:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:07.085 "params": { 00:10:07.085 "name": "Nvme1", 00:10:07.085 "trtype": "tcp", 00:10:07.085 "traddr": "10.0.0.2", 00:10:07.085 "adrfam": "ipv4", 00:10:07.085 "trsvcid": "4420", 00:10:07.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:07.085 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:07.085 "hdgst": false, 00:10:07.085 "ddgst": false 00:10:07.085 }, 00:10:07.085 "method": "bdev_nvme_attach_controller" 00:10:07.085 }' 00:10:07.085 [2024-11-20 12:19:49.740926] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:10:07.085 [2024-11-20 12:19:49.740975] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335938 ] 00:10:07.085 [2024-11-20 12:19:49.818498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:07.085 [2024-11-20 12:19:49.862681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.085 [2024-11-20 12:19:49.862791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.085 [2024-11-20 12:19:49.862791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.085 I/O targets: 00:10:07.085 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:07.085 00:10:07.085 00:10:07.085 CUnit - A unit testing framework for C - Version 2.1-3 00:10:07.085 http://cunit.sourceforge.net/ 00:10:07.085 00:10:07.085 00:10:07.085 Suite: bdevio tests on: Nvme1n1 00:10:07.085 Test: blockdev write read block ...passed 00:10:07.344 Test: blockdev write zeroes read block ...passed 00:10:07.344 Test: blockdev write zeroes read no split ...passed 00:10:07.344 Test: blockdev write zeroes read split 
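The `gen_nvmf_target_json` trace above shows the pattern used to build the bdevio `--json` config: one JSON object per subsystem is rendered from a heredoc template into a bash array, then the elements are joined with commas (the real script then pipes the result through `jq`). A condensed, hypothetical rendering of that pattern (the actual template in `nvmf/common.sh` also fills in `traddr`, `hostnqn`, `hdgst`/`ddgst`, and more):

```shell
# Simplified sketch of the heredoc-per-subsystem config builder seen in
# the log. Each loop iteration appends one rendered JSON object to the
# array; "${@:-1}" defaults to subsystem 1 when no arguments are given.
gen_json() {
    local config=() subsystem
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trsvcid": "4420" },
  "method": "bdev_nvme_attach_controller" }
EOF
        )")
    done
    local IFS=,                   # "${config[*]}" joins with the first
    printf '%s\n' "${config[*]}"  # character of IFS, i.e. a comma
}
gen_json 1 2
```

The `local IFS=,` trick is what turns the array into a comma-separated JSON fragment without an explicit join loop; `$(...)` strips the heredoc's trailing newline, so consecutive objects meet as `},{`.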
...passed 00:10:07.344 Test: blockdev write zeroes read split partial ...passed 00:10:07.344 Test: blockdev reset ...[2024-11-20 12:19:50.294968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:07.344 [2024-11-20 12:19:50.295038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd70340 (9): Bad file descriptor 00:10:07.344 [2024-11-20 12:19:50.397571] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:07.344 passed 00:10:07.344 Test: blockdev write read 8 blocks ...passed 00:10:07.344 Test: blockdev write read size > 128k ...passed 00:10:07.344 Test: blockdev write read invalid size ...passed 00:10:07.344 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:07.344 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:07.344 Test: blockdev write read max offset ...passed 00:10:07.602 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:07.602 Test: blockdev writev readv 8 blocks ...passed 00:10:07.602 Test: blockdev writev readv 30 x 1block ...passed 00:10:07.602 Test: blockdev writev readv block ...passed 00:10:07.602 Test: blockdev writev readv size > 128k ...passed 00:10:07.602 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:07.602 Test: blockdev comparev and writev ...[2024-11-20 12:19:50.570799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.602 [2024-11-20 12:19:50.570829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:07.602 [2024-11-20 12:19:50.570843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.602 [2024-11-20 
12:19:50.570851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:07.602 [2024-11-20 12:19:50.571099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.602 [2024-11-20 12:19:50.571111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:07.602 [2024-11-20 12:19:50.571122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.602 [2024-11-20 12:19:50.571130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:07.603 [2024-11-20 12:19:50.571371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.603 [2024-11-20 12:19:50.571383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:07.603 [2024-11-20 12:19:50.571396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.603 [2024-11-20 12:19:50.571403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:07.603 [2024-11-20 12:19:50.571643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.603 [2024-11-20 12:19:50.571654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:07.603 [2024-11-20 12:19:50.571667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:07.603 [2024-11-20 12:19:50.571676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:07.603 passed 00:10:07.603 Test: blockdev nvme passthru rw ...passed 00:10:07.603 Test: blockdev nvme passthru vendor specific ...[2024-11-20 12:19:50.654283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:07.603 [2024-11-20 12:19:50.654299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:07.603 [2024-11-20 12:19:50.654406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:07.603 [2024-11-20 12:19:50.654417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:07.603 [2024-11-20 12:19:50.654518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:07.603 [2024-11-20 12:19:50.654529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:07.603 [2024-11-20 12:19:50.654630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:07.603 [2024-11-20 12:19:50.654641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:07.603 passed 00:10:07.603 Test: blockdev nvme admin passthru ...passed 00:10:07.603 Test: blockdev copy ...passed 00:10:07.603 00:10:07.603 Run Summary: Type Total Ran Passed Failed Inactive 00:10:07.603 suites 1 1 n/a 0 0 00:10:07.603 tests 23 23 23 0 0 00:10:07.603 asserts 152 152 152 0 n/a 00:10:07.603 00:10:07.603 Elapsed time = 1.140 seconds 
00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:07.862 rmmod nvme_tcp 00:10:07.862 rmmod nvme_fabrics 00:10:07.862 rmmod nvme_keyring 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 335915 ']' 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 335915 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 335915 ']' 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 335915 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 335915 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 335915' 00:10:07.862 killing process with pid 335915 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 335915 00:10:07.862 12:19:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 335915 00:10:08.121 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:08.121 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:08.121 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:08.121 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:08.121 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:08.121 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:08.121 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:08.121 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:10:08.121 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:08.121 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.121 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.121 12:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:10.661 00:10:10.661 real 0m10.098s 00:10:10.661 user 0m10.614s 00:10:10.661 sys 0m5.066s 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.661 ************************************ 00:10:10.661 END TEST nvmf_bdevio 00:10:10.661 ************************************ 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:10.661 00:10:10.661 real 4m36.122s 00:10:10.661 user 10m18.533s 00:10:10.661 sys 1m37.507s 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.661 ************************************ 00:10:10.661 END TEST nvmf_target_core 00:10:10.661 ************************************ 00:10:10.661 12:19:53 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:10.661 12:19:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:10.661 12:19:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.661 12:19:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:10:10.661 ************************************ 00:10:10.661 START TEST nvmf_target_extra 00:10:10.661 ************************************ 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:10.661 * Looking for test storage... 00:10:10.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:10.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.661 --rc genhtml_branch_coverage=1 00:10:10.661 --rc genhtml_function_coverage=1 00:10:10.661 --rc genhtml_legend=1 00:10:10.661 --rc geninfo_all_blocks=1 
00:10:10.661 --rc geninfo_unexecuted_blocks=1 00:10:10.661 00:10:10.661 ' 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:10.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.661 --rc genhtml_branch_coverage=1 00:10:10.661 --rc genhtml_function_coverage=1 00:10:10.661 --rc genhtml_legend=1 00:10:10.661 --rc geninfo_all_blocks=1 00:10:10.661 --rc geninfo_unexecuted_blocks=1 00:10:10.661 00:10:10.661 ' 00:10:10.661 12:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:10.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.661 --rc genhtml_branch_coverage=1 00:10:10.661 --rc genhtml_function_coverage=1 00:10:10.661 --rc genhtml_legend=1 00:10:10.662 --rc geninfo_all_blocks=1 00:10:10.662 --rc geninfo_unexecuted_blocks=1 00:10:10.662 00:10:10.662 ' 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:10.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.662 --rc genhtml_branch_coverage=1 00:10:10.662 --rc genhtml_function_coverage=1 00:10:10.662 --rc genhtml_legend=1 00:10:10.662 --rc geninfo_all_blocks=1 00:10:10.662 --rc geninfo_unexecuted_blocks=1 00:10:10.662 00:10:10.662 ' 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:10.662 ************************************ 00:10:10.662 START TEST nvmf_example 00:10:10.662 ************************************ 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:10.662 * Looking for test storage... 00:10:10.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.662 
12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:10.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.662 --rc genhtml_branch_coverage=1 00:10:10.662 --rc genhtml_function_coverage=1 00:10:10.662 --rc genhtml_legend=1 00:10:10.662 --rc geninfo_all_blocks=1 00:10:10.662 --rc geninfo_unexecuted_blocks=1 00:10:10.662 00:10:10.662 ' 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:10.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.662 --rc genhtml_branch_coverage=1 00:10:10.662 --rc genhtml_function_coverage=1 00:10:10.662 --rc genhtml_legend=1 00:10:10.662 --rc geninfo_all_blocks=1 00:10:10.662 --rc geninfo_unexecuted_blocks=1 00:10:10.662 00:10:10.662 ' 00:10:10.662 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:10.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.662 --rc genhtml_branch_coverage=1 00:10:10.662 --rc genhtml_function_coverage=1 00:10:10.662 --rc genhtml_legend=1 00:10:10.662 --rc geninfo_all_blocks=1 00:10:10.663 --rc geninfo_unexecuted_blocks=1 00:10:10.663 00:10:10.663 ' 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:10.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.663 --rc 
genhtml_branch_coverage=1 00:10:10.663 --rc genhtml_function_coverage=1 00:10:10.663 --rc genhtml_legend=1 00:10:10.663 --rc geninfo_all_blocks=1 00:10:10.663 --rc geninfo_unexecuted_blocks=1 00:10:10.663 00:10:10.663 ' 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.663 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.922 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.922 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:10.922 12:19:53 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:10.922 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:10.923 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:10.923 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:10.923 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:10.923 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:10.923 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:10.923 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:10.923 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:10.923 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:10.923 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:10.923 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.923 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:10.923 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:10.923 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:10.923 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.923 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.923 
12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.923 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:10.923 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:10.923 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:10.923 12:19:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:17.493 12:19:59 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:17.493 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:17.493 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:17.493 Found net devices under 0000:86:00.0: cvl_0_0 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:17.493 12:19:59 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:17.493 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:17.494 Found net devices under 0000:86:00.1: cvl_0_1 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:17.494 
12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:17.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:17.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:10:17.494 00:10:17.494 --- 10.0.0.2 ping statistics --- 00:10:17.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.494 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:17.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:17.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:10:17.494 00:10:17.494 --- 10.0.0.1 ping statistics --- 00:10:17.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.494 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:17.494 12:19:59 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=339765 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 339765 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 339765 ']' 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:10:17.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.494 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.753 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.753 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:17.753 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:17.753 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.753 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:17.754 12:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:17.754 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:29.962 Initializing NVMe Controllers 00:10:29.962 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:29.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:29.962 Initialization complete. Launching workers. 00:10:29.962 ======================================================== 00:10:29.962 Latency(us) 00:10:29.962 Device Information : IOPS MiB/s Average min max 00:10:29.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18119.31 70.78 3531.46 594.23 15416.62 00:10:29.962 ======================================================== 00:10:29.962 Total : 18119.31 70.78 3531.46 594.23 15416.62 00:10:29.962 00:10:29.962 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:29.962 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:29.962 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:29.962 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:29.962 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:29.962 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:29.962 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:29.962 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:29.962 rmmod nvme_tcp 00:10:29.962 rmmod nvme_fabrics 00:10:29.962 rmmod nvme_keyring 00:10:29.962 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:29.962 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:10:29.962 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:29.962 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 339765 ']' 00:10:29.962 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 339765 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 339765 ']' 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 339765 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 339765 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 339765' 00:10:29.963 killing process with pid 339765 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 339765 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 339765 00:10:29.963 nvmf threads initialize successfully 00:10:29.963 bdev subsystem init successfully 00:10:29.963 created a nvmf target service 00:10:29.963 create targets's poll groups done 00:10:29.963 all subsystems of target started 00:10:29.963 nvmf target is running 00:10:29.963 all subsystems of target stopped 00:10:29.963 destroy targets's poll groups done 00:10:29.963 destroyed the nvmf target service 00:10:29.963 bdev subsystem finish 
successfully 00:10:29.963 nvmf threads destroy successfully 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.963 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.531 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:30.531 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:30.531 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:30.531 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:30.531 00:10:30.531 real 0m19.813s 00:10:30.531 user 0m45.882s 00:10:30.531 sys 0m6.109s 00:10:30.532 12:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:30.532 ************************************ 00:10:30.532 END TEST nvmf_example 00:10:30.532 ************************************ 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:30.532 ************************************ 00:10:30.532 START TEST nvmf_filesystem 00:10:30.532 ************************************ 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:30.532 * Looking for test storage... 
00:10:30.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:30.532 
12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:30.532 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:30.532 --rc genhtml_branch_coverage=1 00:10:30.532 --rc genhtml_function_coverage=1 00:10:30.532 --rc genhtml_legend=1 00:10:30.532 --rc geninfo_all_blocks=1 00:10:30.532 --rc geninfo_unexecuted_blocks=1 00:10:30.532 00:10:30.532 ' 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:30.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.532 --rc genhtml_branch_coverage=1 00:10:30.532 --rc genhtml_function_coverage=1 00:10:30.532 --rc genhtml_legend=1 00:10:30.532 --rc geninfo_all_blocks=1 00:10:30.532 --rc geninfo_unexecuted_blocks=1 00:10:30.532 00:10:30.532 ' 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:30.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.532 --rc genhtml_branch_coverage=1 00:10:30.532 --rc genhtml_function_coverage=1 00:10:30.532 --rc genhtml_legend=1 00:10:30.532 --rc geninfo_all_blocks=1 00:10:30.532 --rc geninfo_unexecuted_blocks=1 00:10:30.532 00:10:30.532 ' 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:30.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.532 --rc genhtml_branch_coverage=1 00:10:30.532 --rc genhtml_function_coverage=1 00:10:30.532 --rc genhtml_legend=1 00:10:30.532 --rc geninfo_all_blocks=1 00:10:30.532 --rc geninfo_unexecuted_blocks=1 00:10:30.532 00:10:30.532 ' 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:30.532 12:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:30.532 12:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:30.532 12:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:30.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:30.794 12:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:30.794 12:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:30.794 
12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:30.794 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:30.794 #define SPDK_CONFIG_H 00:10:30.794 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:30.794 #define SPDK_CONFIG_APPS 1 00:10:30.794 #define SPDK_CONFIG_ARCH native 00:10:30.794 #undef SPDK_CONFIG_ASAN 00:10:30.794 #undef SPDK_CONFIG_AVAHI 00:10:30.794 #undef SPDK_CONFIG_CET 00:10:30.794 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:30.794 #define SPDK_CONFIG_COVERAGE 1 00:10:30.794 #define SPDK_CONFIG_CROSS_PREFIX 00:10:30.794 #undef SPDK_CONFIG_CRYPTO 00:10:30.794 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:30.794 #undef SPDK_CONFIG_CUSTOMOCF 00:10:30.794 #undef SPDK_CONFIG_DAOS 00:10:30.794 #define SPDK_CONFIG_DAOS_DIR 00:10:30.794 #define SPDK_CONFIG_DEBUG 1 00:10:30.794 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:30.794 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:30.794 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:30.794 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:30.794 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:30.794 #undef SPDK_CONFIG_DPDK_UADK 00:10:30.794 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:30.794 #define SPDK_CONFIG_EXAMPLES 1 00:10:30.794 #undef SPDK_CONFIG_FC 00:10:30.794 #define SPDK_CONFIG_FC_PATH 00:10:30.794 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:30.794 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:30.794 #define SPDK_CONFIG_FSDEV 1 00:10:30.794 #undef SPDK_CONFIG_FUSE 00:10:30.795 #undef SPDK_CONFIG_FUZZER 00:10:30.795 #define SPDK_CONFIG_FUZZER_LIB 00:10:30.795 #undef SPDK_CONFIG_GOLANG 00:10:30.795 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:30.795 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:30.795 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:30.795 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:30.795 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:30.795 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:30.795 #undef SPDK_CONFIG_HAVE_LZ4 00:10:30.795 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:30.795 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:30.795 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:30.795 #define SPDK_CONFIG_IDXD 1 00:10:30.795 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:30.795 #undef SPDK_CONFIG_IPSEC_MB 00:10:30.795 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:30.795 #define SPDK_CONFIG_ISAL 1 00:10:30.795 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:30.795 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:30.795 #define SPDK_CONFIG_LIBDIR 00:10:30.795 #undef SPDK_CONFIG_LTO 00:10:30.795 #define SPDK_CONFIG_MAX_LCORES 128 00:10:30.795 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:30.795 #define SPDK_CONFIG_NVME_CUSE 1 00:10:30.795 #undef SPDK_CONFIG_OCF 00:10:30.795 #define SPDK_CONFIG_OCF_PATH 00:10:30.795 #define SPDK_CONFIG_OPENSSL_PATH 00:10:30.795 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:30.795 #define SPDK_CONFIG_PGO_DIR 00:10:30.795 #undef SPDK_CONFIG_PGO_USE 00:10:30.795 #define SPDK_CONFIG_PREFIX /usr/local 00:10:30.795 #undef SPDK_CONFIG_RAID5F 00:10:30.795 #undef SPDK_CONFIG_RBD 00:10:30.795 #define SPDK_CONFIG_RDMA 1 00:10:30.795 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:30.795 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:30.795 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:30.795 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:30.795 #define SPDK_CONFIG_SHARED 1 00:10:30.795 #undef SPDK_CONFIG_SMA 00:10:30.795 #define SPDK_CONFIG_TESTS 1 00:10:30.795 #undef SPDK_CONFIG_TSAN 00:10:30.795 #define SPDK_CONFIG_UBLK 1 00:10:30.795 #define SPDK_CONFIG_UBSAN 1 00:10:30.795 #undef SPDK_CONFIG_UNIT_TESTS 00:10:30.795 #undef SPDK_CONFIG_URING 00:10:30.795 #define SPDK_CONFIG_URING_PATH 00:10:30.795 #undef SPDK_CONFIG_URING_ZNS 00:10:30.795 #undef SPDK_CONFIG_USDT 00:10:30.795 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:30.795 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:30.795 #define SPDK_CONFIG_VFIO_USER 1 00:10:30.795 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:30.795 #define SPDK_CONFIG_VHOST 1 00:10:30.795 #define SPDK_CONFIG_VIRTIO 1 00:10:30.795 #undef SPDK_CONFIG_VTUNE 00:10:30.795 #define SPDK_CONFIG_VTUNE_DIR 00:10:30.795 #define SPDK_CONFIG_WERROR 1 00:10:30.795 #define SPDK_CONFIG_WPDK_DIR 00:10:30.795 #undef SPDK_CONFIG_XNVME 00:10:30.795 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:30.795 12:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:30.795 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:30.796 
12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:30.796 12:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:30.796 
12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:30.796 12:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.796 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 342168 ]] 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 342168 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.fd5sGw 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.fd5sGw/tests/target /tmp/spdk.fd5sGw 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:30.797 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189183033344 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963961344 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6780928000 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971949568 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981386752 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:10:30.798 12:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=593920 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:30.798 * Looking for test storage... 
00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189183033344 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8995520512 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.798 12:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:30.798 12:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.798 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:30.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.799 --rc genhtml_branch_coverage=1 00:10:30.799 --rc genhtml_function_coverage=1 00:10:30.799 --rc genhtml_legend=1 00:10:30.799 --rc geninfo_all_blocks=1 00:10:30.799 --rc geninfo_unexecuted_blocks=1 00:10:30.799 00:10:30.799 ' 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:30.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.799 --rc genhtml_branch_coverage=1 00:10:30.799 --rc genhtml_function_coverage=1 00:10:30.799 --rc genhtml_legend=1 00:10:30.799 --rc geninfo_all_blocks=1 00:10:30.799 --rc geninfo_unexecuted_blocks=1 00:10:30.799 00:10:30.799 ' 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:30.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.799 --rc genhtml_branch_coverage=1 00:10:30.799 --rc genhtml_function_coverage=1 00:10:30.799 --rc genhtml_legend=1 00:10:30.799 --rc geninfo_all_blocks=1 00:10:30.799 --rc geninfo_unexecuted_blocks=1 00:10:30.799 00:10:30.799 ' 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:30.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.799 --rc genhtml_branch_coverage=1 00:10:30.799 --rc genhtml_function_coverage=1 00:10:30.799 --rc genhtml_legend=1 00:10:30.799 --rc geninfo_all_blocks=1 00:10:30.799 --rc geninfo_unexecuted_blocks=1 00:10:30.799 00:10:30.799 ' 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.799 12:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.799 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.058 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:31.058 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.058 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.058 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:31.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:31.059 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.694 12:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:37.694 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:37.694 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.694 12:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:37.694 Found net devices under 0000:86:00.0: cvl_0_0 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:37.694 Found net devices under 0000:86:00.1: cvl_0_1 00:10:37.694 12:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:37.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:37.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:10:37.694 00:10:37.694 --- 10.0.0.2 ping statistics --- 00:10:37.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.694 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:10:37.694 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:10:37.695 00:10:37.695 --- 10.0.0.1 ping statistics --- 00:10:37.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.695 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:37.695 12:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.695 ************************************ 00:10:37.695 START TEST nvmf_filesystem_no_in_capsule 00:10:37.695 ************************************ 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=345429 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 345429 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:37.695 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 345429 ']' 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.695 [2024-11-20 12:20:20.048685] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:10:37.695 [2024-11-20 12:20:20.048730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.695 [2024-11-20 12:20:20.132200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.695 [2024-11-20 12:20:20.175423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.695 [2024-11-20 12:20:20.175460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:37.695 [2024-11-20 12:20:20.175467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.695 [2024-11-20 12:20:20.175474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.695 [2024-11-20 12:20:20.175479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.695 [2024-11-20 12:20:20.177052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.695 [2024-11-20 12:20:20.177081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.695 [2024-11-20 12:20:20.177189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.695 [2024-11-20 12:20:20.177190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.695 [2024-11-20 12:20:20.317545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.695 Malloc1 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.695 [2024-11-20 12:20:20.459839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:37.695 12:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.695 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:37.695 { 00:10:37.695 "name": "Malloc1", 00:10:37.695 "aliases": [ 00:10:37.695 "3eb46dc7-4feb-465a-95f3-895022feb665" 00:10:37.695 ], 00:10:37.695 "product_name": "Malloc disk", 00:10:37.695 "block_size": 512, 00:10:37.695 "num_blocks": 1048576, 00:10:37.695 "uuid": "3eb46dc7-4feb-465a-95f3-895022feb665", 00:10:37.695 "assigned_rate_limits": { 00:10:37.695 "rw_ios_per_sec": 0, 00:10:37.695 "rw_mbytes_per_sec": 0, 00:10:37.695 "r_mbytes_per_sec": 0, 00:10:37.695 "w_mbytes_per_sec": 0 00:10:37.695 }, 00:10:37.695 "claimed": true, 00:10:37.695 "claim_type": "exclusive_write", 00:10:37.695 "zoned": false, 00:10:37.695 "supported_io_types": { 00:10:37.695 "read": true, 00:10:37.695 "write": true, 00:10:37.695 "unmap": true, 00:10:37.695 "flush": true, 00:10:37.695 "reset": true, 00:10:37.696 "nvme_admin": false, 00:10:37.696 "nvme_io": false, 00:10:37.696 "nvme_io_md": false, 00:10:37.696 "write_zeroes": true, 00:10:37.696 "zcopy": true, 00:10:37.696 "get_zone_info": false, 00:10:37.696 "zone_management": false, 00:10:37.696 "zone_append": false, 00:10:37.696 "compare": false, 00:10:37.696 "compare_and_write": 
false, 00:10:37.696 "abort": true, 00:10:37.696 "seek_hole": false, 00:10:37.696 "seek_data": false, 00:10:37.696 "copy": true, 00:10:37.696 "nvme_iov_md": false 00:10:37.696 }, 00:10:37.696 "memory_domains": [ 00:10:37.696 { 00:10:37.696 "dma_device_id": "system", 00:10:37.696 "dma_device_type": 1 00:10:37.696 }, 00:10:37.696 { 00:10:37.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.696 "dma_device_type": 2 00:10:37.696 } 00:10:37.696 ], 00:10:37.696 "driver_specific": {} 00:10:37.696 } 00:10:37.696 ]' 00:10:37.696 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:37.696 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:37.696 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:37.696 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:37.696 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:37.696 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:37.696 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:37.696 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:38.632 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:38.632 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:38.632 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:38.632 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:38.632 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:40.634 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:40.634 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:40.634 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.634 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:40.634 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.634 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:40.634 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:40.634 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:40.634 12:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:40.634 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:40.634 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:40.634 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:40.634 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:40.634 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:40.634 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:40.634 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:40.634 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:41.198 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:41.198 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:42.133 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:42.133 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:42.133 12:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:42.133 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.133 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.133 ************************************ 00:10:42.133 START TEST filesystem_ext4 00:10:42.133 ************************************ 00:10:42.133 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:42.133 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:42.133 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:42.133 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:42.133 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:42.133 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:42.133 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:42.133 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:42.133 12:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:42.133 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:42.133 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:42.133 mke2fs 1.47.0 (5-Feb-2023) 00:10:42.391 Discarding device blocks: 0/522240 done 00:10:42.391 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:42.391 Filesystem UUID: 44bff797-6508-480c-a0e9-6c429f0e8291 00:10:42.391 Superblock backups stored on blocks: 00:10:42.391 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:42.391 00:10:42.391 Allocating group tables: 0/64 done 00:10:42.391 Writing inode tables: 0/64 done 00:10:42.391 Creating journal (8192 blocks): done 00:10:42.391 Writing superblocks and filesystem accounting information: 0/64 done 00:10:42.391 00:10:42.391 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:42.391 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:48.976 12:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 345429 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:48.976 00:10:48.976 real 0m6.297s 00:10:48.976 user 0m0.030s 00:10:48.976 sys 0m0.067s 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:48.976 ************************************ 00:10:48.976 END TEST filesystem_ext4 00:10:48.976 ************************************ 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:48.976 
12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.976 ************************************ 00:10:48.976 START TEST filesystem_btrfs 00:10:48.976 ************************************ 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:48.976 12:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:48.976 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:48.976 btrfs-progs v6.8.1 00:10:48.976 See https://btrfs.readthedocs.io for more information. 00:10:48.976 00:10:48.976 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:48.976 NOTE: several default settings have changed in version 5.15, please make sure 00:10:48.976 this does not affect your deployments: 00:10:48.976 - DUP for metadata (-m dup) 00:10:48.976 - enabled no-holes (-O no-holes) 00:10:48.976 - enabled free-space-tree (-R free-space-tree) 00:10:48.976 00:10:48.976 Label: (null) 00:10:48.976 UUID: ea2ebfd0-777e-42ed-bef3-b7545aee1ca6 00:10:48.976 Node size: 16384 00:10:48.976 Sector size: 4096 (CPU page size: 4096) 00:10:48.976 Filesystem size: 510.00MiB 00:10:48.976 Block group profiles: 00:10:48.976 Data: single 8.00MiB 00:10:48.976 Metadata: DUP 32.00MiB 00:10:48.976 System: DUP 8.00MiB 00:10:48.976 SSD detected: yes 00:10:48.977 Zoned device: no 00:10:48.977 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:48.977 Checksum: crc32c 00:10:48.977 Number of devices: 1 00:10:48.977 Devices: 00:10:48.977 ID SIZE PATH 00:10:48.977 1 510.00MiB /dev/nvme0n1p1 00:10:48.977 00:10:48.977 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:48.977 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:49.236 12:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 345429 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:49.236 00:10:49.236 real 0m0.657s 00:10:49.236 user 0m0.019s 00:10:49.236 sys 0m0.122s 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.236 
12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:49.236 ************************************ 00:10:49.236 END TEST filesystem_btrfs 00:10:49.236 ************************************ 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.236 ************************************ 00:10:49.236 START TEST filesystem_xfs 00:10:49.236 ************************************ 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:49.236 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:49.495 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:49.495 = sectsz=512 attr=2, projid32bit=1 00:10:49.495 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:49.495 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:49.495 data = bsize=4096 blocks=130560, imaxpct=25 00:10:49.495 = sunit=0 swidth=0 blks 00:10:49.495 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:49.495 log =internal log bsize=4096 blocks=16384, version=2 00:10:49.495 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:49.495 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:50.429 Discarding blocks...Done. 
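The ext4, btrfs, and xfs runs in this log all follow the same make-filesystem-and-smoke-test pattern from target/filesystem.sh: format the partition, mount it, touch/sync/rm a file, and unmount. As a rough, hedged sketch of that pattern against a plain image file (so it can run without an NVMe device; the image path `/tmp/fs_test.img` is an assumption for illustration, not from the log, and the mount portion needs root so it is shown only as comments):

```shell
#!/bin/sh
# Sketch of the mkfs smoke test seen in this log, run against a loopback-style
# image file instead of /dev/nvme0n1p1. Illustrative only, not the harness code.
set -e
img=/tmp/fs_test.img            # hypothetical scratch image (assumed path)
truncate -s 512M "$img"         # same 512 MiB size as the Malloc1 bdev in the log
if command -v mkfs.ext4 >/dev/null 2>&1; then
    mkfs.ext4 -F -q "$img"      # -F forces mkfs onto a regular file
fi
ls -l "$img"
# With root privileges, the log's smoke test continues roughly as:
#   mount "$img" /mnt/device && touch /mnt/device/aaa && sync
#   rm /mnt/device/aaa && sync && umount /mnt/device
```

The same sketch applies to the btrfs and xfs variants by swapping in `mkfs.btrfs -f` or `mkfs.xfs -f`, as target/filesystem.sh does via its `make_filesystem` helper.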
00:10:50.429 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:50.429 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:52.957 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:52.957 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:52.957 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:52.957 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:52.957 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:52.957 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:52.957 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 345429 00:10:52.957 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:52.957 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:52.957 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:52.957 12:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:52.957 00:10:52.957 real 0m3.593s 00:10:52.957 user 0m0.020s 00:10:52.957 sys 0m0.079s 00:10:52.957 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.957 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:52.957 ************************************ 00:10:52.957 END TEST filesystem_xfs 00:10:52.957 ************************************ 00:10:52.957 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:52.957 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:52.957 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:52.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.957 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:52.957 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:52.957 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:52.957 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.957 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:52.957 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.957 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:52.957 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.957 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.957 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.957 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.957 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:52.957 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 345429 00:10:52.957 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 345429 ']' 00:10:52.957 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 345429 00:10:52.957 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:52.957 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.957 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 345429 00:10:53.216 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.216 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.216 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 345429' 00:10:53.216 killing process with pid 345429 00:10:53.216 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 345429 00:10:53.216 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 345429 00:10:53.475 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:53.475 00:10:53.475 real 0m16.441s 00:10:53.475 user 1m4.663s 00:10:53.475 sys 0m1.380s 00:10:53.475 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.475 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.475 ************************************ 00:10:53.475 END TEST nvmf_filesystem_no_in_capsule 00:10:53.475 ************************************ 00:10:53.475 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:53.475 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:53.475 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.475 12:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:53.475 ************************************ 00:10:53.475 START TEST nvmf_filesystem_in_capsule 00:10:53.475 ************************************ 00:10:53.475 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:53.475 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:53.475 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:53.475 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:53.475 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:53.475 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.475 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=348352 00:10:53.475 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:53.475 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 348352 00:10:53.475 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 348352 ']' 00:10:53.475 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.475 12:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.475 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.475 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.476 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.476 [2024-11-20 12:20:36.567066] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:10:53.476 [2024-11-20 12:20:36.567113] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.735 [2024-11-20 12:20:36.645628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.735 [2024-11-20 12:20:36.688662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.735 [2024-11-20 12:20:36.688699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.735 [2024-11-20 12:20:36.688711] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.735 [2024-11-20 12:20:36.688717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.735 [2024-11-20 12:20:36.688722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:53.735 [2024-11-20 12:20:36.690200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.735 [2024-11-20 12:20:36.690310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.735 [2024-11-20 12:20:36.690415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.735 [2024-11-20 12:20:36.690415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.735 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.735 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:53.735 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:53.735 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:53.735 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.735 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.735 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:53.735 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:53.735 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.735 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.735 [2024-11-20 12:20:36.832279] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.735 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.735 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:53.735 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.735 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.995 Malloc1 00:10:53.995 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.995 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:53.995 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.995 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.995 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.995 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:53.995 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.995 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.995 12:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.995 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.995 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.995 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.995 [2024-11-20 12:20:36.986672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.995 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.995 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:53.995 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:53.995 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:53.995 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:53.995 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:53.995 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:53.995 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.995 12:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.995 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.995 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:53.995 { 00:10:53.995 "name": "Malloc1", 00:10:53.995 "aliases": [ 00:10:53.995 "b11a2e28-04bf-4f17-af12-96d905f1c7c1" 00:10:53.995 ], 00:10:53.995 "product_name": "Malloc disk", 00:10:53.995 "block_size": 512, 00:10:53.995 "num_blocks": 1048576, 00:10:53.995 "uuid": "b11a2e28-04bf-4f17-af12-96d905f1c7c1", 00:10:53.995 "assigned_rate_limits": { 00:10:53.995 "rw_ios_per_sec": 0, 00:10:53.995 "rw_mbytes_per_sec": 0, 00:10:53.995 "r_mbytes_per_sec": 0, 00:10:53.995 "w_mbytes_per_sec": 0 00:10:53.995 }, 00:10:53.995 "claimed": true, 00:10:53.995 "claim_type": "exclusive_write", 00:10:53.995 "zoned": false, 00:10:53.995 "supported_io_types": { 00:10:53.995 "read": true, 00:10:53.995 "write": true, 00:10:53.995 "unmap": true, 00:10:53.995 "flush": true, 00:10:53.995 "reset": true, 00:10:53.995 "nvme_admin": false, 00:10:53.995 "nvme_io": false, 00:10:53.995 "nvme_io_md": false, 00:10:53.995 "write_zeroes": true, 00:10:53.995 "zcopy": true, 00:10:53.995 "get_zone_info": false, 00:10:53.995 "zone_management": false, 00:10:53.995 "zone_append": false, 00:10:53.995 "compare": false, 00:10:53.995 "compare_and_write": false, 00:10:53.995 "abort": true, 00:10:53.995 "seek_hole": false, 00:10:53.995 "seek_data": false, 00:10:53.995 "copy": true, 00:10:53.995 "nvme_iov_md": false 00:10:53.995 }, 00:10:53.995 "memory_domains": [ 00:10:53.995 { 00:10:53.995 "dma_device_id": "system", 00:10:53.995 "dma_device_type": 1 00:10:53.995 }, 00:10:53.995 { 00:10:53.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.995 "dma_device_type": 2 00:10:53.995 } 00:10:53.995 ], 00:10:53.995 
"driver_specific": {} 00:10:53.995 } 00:10:53.995 ]' 00:10:53.995 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:53.995 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:53.995 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:53.995 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:53.995 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:53.995 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:53.995 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:53.995 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.373 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:55.373 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:55.373 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.373 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:55.373 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:57.276 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:57.276 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:57.276 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.276 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:57.276 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.276 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:57.276 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:57.276 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:57.276 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:57.276 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:57.276 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:57.276 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:57.276 12:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:57.276 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:57.276 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:57.276 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:57.276 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:57.535 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:57.794 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:58.730 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:58.730 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:58.730 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:58.730 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.730 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.730 ************************************ 00:10:58.730 START TEST filesystem_in_capsule_ext4 00:10:58.730 ************************************ 00:10:58.730 12:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:58.730 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:58.730 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:58.730 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:58.730 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:58.730 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:58.730 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:58.730 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:58.730 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:58.730 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:58.730 12:20:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:58.730 mke2fs 1.47.0 (5-Feb-2023) 00:10:58.989 Discarding device blocks: 
0/522240 done 00:10:58.989 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:58.989 Filesystem UUID: 21f555e2-689d-4b17-bcb4-556254b05b2f 00:10:58.989 Superblock backups stored on blocks: 00:10:58.989 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:58.989 00:10:58.989 Allocating group tables: 0/64 done 00:10:58.989 Writing inode tables: 0/64 done 00:11:01.523 Creating journal (8192 blocks): done 00:11:01.523 Writing superblocks and filesystem accounting information: 0/64 done 00:11:01.523 00:11:01.523 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:01.523 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 348352 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:08.090 00:11:08.090 real 0m8.896s 00:11:08.090 user 0m0.028s 00:11:08.090 sys 0m0.075s 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:08.090 ************************************ 00:11:08.090 END TEST filesystem_in_capsule_ext4 00:11:08.090 ************************************ 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.090 ************************************ 00:11:08.090 START 
TEST filesystem_in_capsule_btrfs 00:11:08.090 ************************************ 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:08.090 btrfs-progs v6.8.1 00:11:08.090 See https://btrfs.readthedocs.io for more information. 00:11:08.090 00:11:08.090 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:08.090 NOTE: several default settings have changed in version 5.15, please make sure 00:11:08.090 this does not affect your deployments: 00:11:08.090 - DUP for metadata (-m dup) 00:11:08.090 - enabled no-holes (-O no-holes) 00:11:08.090 - enabled free-space-tree (-R free-space-tree) 00:11:08.090 00:11:08.090 Label: (null) 00:11:08.090 UUID: dc6d6bea-f922-437e-8634-ca06da482473 00:11:08.090 Node size: 16384 00:11:08.090 Sector size: 4096 (CPU page size: 4096) 00:11:08.090 Filesystem size: 510.00MiB 00:11:08.090 Block group profiles: 00:11:08.090 Data: single 8.00MiB 00:11:08.090 Metadata: DUP 32.00MiB 00:11:08.090 System: DUP 8.00MiB 00:11:08.090 SSD detected: yes 00:11:08.090 Zoned device: no 00:11:08.090 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:08.090 Checksum: crc32c 00:11:08.090 Number of devices: 1 00:11:08.090 Devices: 00:11:08.090 ID SIZE PATH 00:11:08.090 1 510.00MiB /dev/nvme0n1p1 00:11:08.090 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:08.090 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:08.349 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:08.349 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:08.349 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:08.349 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:08.349 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:08.349 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:08.349 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 348352 00:11:08.349 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:08.349 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:08.609 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:08.609 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:08.609 00:11:08.609 real 0m0.719s 00:11:08.609 user 0m0.021s 00:11:08.609 sys 0m0.121s 00:11:08.609 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.609 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:08.609 ************************************ 00:11:08.609 END TEST filesystem_in_capsule_btrfs 00:11:08.609 ************************************ 00:11:08.609 12:20:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:08.609 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:08.609 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.609 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.609 ************************************ 00:11:08.609 START TEST filesystem_in_capsule_xfs 00:11:08.609 ************************************ 00:11:08.609 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:08.609 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:08.609 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:08.609 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:08.609 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:08.609 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:08.609 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:08.609 
12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:08.609 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:08.609 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:08.609 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:08.609 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:08.609 = sectsz=512 attr=2, projid32bit=1 00:11:08.609 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:08.609 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:08.609 data = bsize=4096 blocks=130560, imaxpct=25 00:11:08.609 = sunit=0 swidth=0 blks 00:11:08.609 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:08.609 log =internal log bsize=4096 blocks=16384, version=2 00:11:08.609 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:08.609 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:09.546 Discarding blocks...Done. 
00:11:09.546 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:09.546 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:11.452 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 348352 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:11.453 00:11:11.453 real 0m2.627s 00:11:11.453 user 0m0.024s 00:11:11.453 sys 0m0.074s 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:11.453 ************************************ 00:11:11.453 END TEST filesystem_in_capsule_xfs 00:11:11.453 ************************************ 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:11.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.453 12:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 348352 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 348352 ']' 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 348352 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.453 12:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 348352 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 348352' 00:11:11.453 killing process with pid 348352 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 348352 00:11:11.453 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 348352 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:12.022 00:11:12.022 real 0m18.329s 00:11:12.022 user 1m12.105s 00:11:12.022 sys 0m1.458s 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.022 ************************************ 00:11:12.022 END TEST nvmf_filesystem_in_capsule 00:11:12.022 ************************************ 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:12.022 rmmod nvme_tcp 00:11:12.022 rmmod nvme_fabrics 00:11:12.022 rmmod nvme_keyring 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.022 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.929 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:13.929 00:11:13.929 real 0m43.553s 00:11:13.929 user 2m18.806s 00:11:13.929 sys 0m7.614s 00:11:13.929 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.929 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:13.929 ************************************ 00:11:13.929 END TEST nvmf_filesystem 00:11:13.929 ************************************ 00:11:14.189 12:20:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:14.189 12:20:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:14.190 ************************************ 00:11:14.190 START TEST nvmf_target_discovery 00:11:14.190 ************************************ 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:14.190 * Looking for test storage... 
00:11:14.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:14.190 
12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:14.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.190 --rc genhtml_branch_coverage=1 00:11:14.190 --rc genhtml_function_coverage=1 00:11:14.190 --rc genhtml_legend=1 00:11:14.190 --rc geninfo_all_blocks=1 00:11:14.190 --rc geninfo_unexecuted_blocks=1 00:11:14.190 00:11:14.190 ' 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:14.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.190 --rc genhtml_branch_coverage=1 00:11:14.190 --rc genhtml_function_coverage=1 00:11:14.190 --rc genhtml_legend=1 00:11:14.190 --rc geninfo_all_blocks=1 00:11:14.190 --rc geninfo_unexecuted_blocks=1 00:11:14.190 00:11:14.190 ' 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:14.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.190 --rc genhtml_branch_coverage=1 00:11:14.190 --rc genhtml_function_coverage=1 00:11:14.190 --rc genhtml_legend=1 00:11:14.190 --rc geninfo_all_blocks=1 00:11:14.190 --rc geninfo_unexecuted_blocks=1 00:11:14.190 00:11:14.190 ' 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:14.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.190 --rc genhtml_branch_coverage=1 00:11:14.190 --rc genhtml_function_coverage=1 00:11:14.190 --rc genhtml_legend=1 00:11:14.190 --rc geninfo_all_blocks=1 00:11:14.190 --rc geninfo_unexecuted_blocks=1 00:11:14.190 00:11:14.190 ' 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.190 12:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:14.190 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:14.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.191 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.450 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:14.450 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:14.450 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:14.450 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.020 12:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.021 12:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:21.021 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:21.021 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.021 12:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:21.021 Found net devices under 0000:86:00.0: cvl_0_0 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.021 12:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:21.021 Found net devices under 0000:86:00.1: cvl_0_1 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:21.021 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.021 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.021 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.021 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.021 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:21.021 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.021 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:21.021 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.021 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:21.021 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:21.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:11:21.021 00:11:21.021 --- 10.0.0.2 ping statistics --- 00:11:21.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.021 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:11:21.021 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:21.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:11:21.022 00:11:21.022 --- 10.0.0.1 ping statistics --- 00:11:21.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.022 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=355213 00:11:21.022 12:21:03 
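The `nvmf_tcp_init` steps traced above (common.sh @250–@291) build a two-port loopback: the target-side port is moved into a network namespace, both sides get 10.0.0.x addresses, an iptables rule admits port 4420, and a ping in each direction verifies the path. A sketch of that sequence, using the interface names from this run (`cvl_0_0`/`cvl_0_1`); since the real commands need root and looped-back hardware, this version only prints them unless `APPLY=1`:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init sequence traced above (common.sh @250-@291).
# Interface names are from this run; the real helper derives them from net_devs.
# Dry-run by default: set APPLY=1 (as root, with looped ports) to execute.
run() { if [ "${APPLY:-0}" = 1 ]; then "$@"; else echo "+ $*"; fi; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target port into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
```

Isolating the target port in its own namespace is what lets a single host exercise real NIC hardware on both ends of the TCP connection, as the two successful pings in the log confirm.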
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 355213 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 355213 ']' 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.022 [2024-11-20 12:21:03.337567] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:11:21.022 [2024-11-20 12:21:03.337618] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.022 [2024-11-20 12:21:03.417024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.022 [2024-11-20 12:21:03.459992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
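With `nvmf_tgt` running inside the namespace, discovery.sh configures it over SPDK's RPC interface (@23–@35 below): create the TCP transport, then for each of four null bdevs create a subsystem, attach the bdev as a namespace, and add a 10.0.0.2:4420 listener, finishing with a discovery listener and a port-4430 referral. A sketch of that call sequence — the `rpc` wrapper here is a dry-run stand-in that prints each call (the test framework's `rpc_cmd` instead invokes `rpc.py` against the live target):

```shell
#!/usr/bin/env bash
# Sketch of the RPC sequence discovery.sh issues below (@23-@35).
# "rpc" is a dry-run stand-in; the real rpc_cmd runs scripts/rpc.py
# against the nvmf_tgt started in the cvl_0_0_ns_spdk namespace.
rpc() { echo "+ rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
    rpc bdev_null_create "Null$i" 102400 512              # 102400 MiB-unit size, 512 B blocks
    rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"
    rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
```

This is exactly why the later `nvme discover` output reports six records: the current discovery subsystem, the four `cnode` NVMe subsystems on 4420, and the 4430 referral.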
00:11:21.022 [2024-11-20 12:21:03.460032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.022 [2024-11-20 12:21:03.460041] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.022 [2024-11-20 12:21:03.460047] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.022 [2024-11-20 12:21:03.460052] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.022 [2024-11-20 12:21:03.461487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.022 [2024-11-20 12:21:03.461598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.022 [2024-11-20 12:21:03.461723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.022 [2024-11-20 12:21:03.461724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.022 [2024-11-20 12:21:03.606206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.022 Null1 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.022 
12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.022 [2024-11-20 12:21:03.651729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.022 Null2 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.022 
12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.022 Null3 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.022 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.023 Null4 00:11:21.023 
12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:21.023 00:11:21.023 Discovery Log Number of Records 6, Generation counter 6 00:11:21.023 =====Discovery Log Entry 0====== 00:11:21.023 trtype: tcp 00:11:21.023 adrfam: ipv4 00:11:21.023 subtype: current discovery subsystem 00:11:21.023 treq: not required 00:11:21.023 portid: 0 00:11:21.023 trsvcid: 4420 00:11:21.023 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:21.023 traddr: 10.0.0.2 00:11:21.023 eflags: explicit discovery connections, duplicate discovery information 00:11:21.023 sectype: none 00:11:21.023 =====Discovery Log Entry 1====== 00:11:21.023 trtype: tcp 00:11:21.023 adrfam: ipv4 00:11:21.023 subtype: nvme subsystem 00:11:21.023 treq: not required 00:11:21.023 portid: 0 00:11:21.023 trsvcid: 4420 00:11:21.023 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:21.023 traddr: 10.0.0.2 00:11:21.023 eflags: none 00:11:21.023 sectype: none 00:11:21.023 =====Discovery Log Entry 2====== 00:11:21.023 
trtype: tcp 00:11:21.023 adrfam: ipv4 00:11:21.023 subtype: nvme subsystem 00:11:21.023 treq: not required 00:11:21.023 portid: 0 00:11:21.023 trsvcid: 4420 00:11:21.023 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:21.023 traddr: 10.0.0.2 00:11:21.023 eflags: none 00:11:21.023 sectype: none 00:11:21.023 =====Discovery Log Entry 3====== 00:11:21.023 trtype: tcp 00:11:21.023 adrfam: ipv4 00:11:21.023 subtype: nvme subsystem 00:11:21.023 treq: not required 00:11:21.023 portid: 0 00:11:21.023 trsvcid: 4420 00:11:21.023 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:21.023 traddr: 10.0.0.2 00:11:21.023 eflags: none 00:11:21.023 sectype: none 00:11:21.023 =====Discovery Log Entry 4====== 00:11:21.023 trtype: tcp 00:11:21.023 adrfam: ipv4 00:11:21.023 subtype: nvme subsystem 00:11:21.023 treq: not required 00:11:21.023 portid: 0 00:11:21.023 trsvcid: 4420 00:11:21.023 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:21.023 traddr: 10.0.0.2 00:11:21.023 eflags: none 00:11:21.023 sectype: none 00:11:21.023 =====Discovery Log Entry 5====== 00:11:21.023 trtype: tcp 00:11:21.023 adrfam: ipv4 00:11:21.023 subtype: discovery subsystem referral 00:11:21.023 treq: not required 00:11:21.023 portid: 0 00:11:21.023 trsvcid: 4430 00:11:21.023 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:21.023 traddr: 10.0.0.2 00:11:21.023 eflags: none 00:11:21.023 sectype: none 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:21.023 Perform nvmf subsystem discovery via RPC 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.023 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.023 [ 00:11:21.023 { 00:11:21.023 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:21.023 "subtype": "Discovery", 00:11:21.023 "listen_addresses": [ 00:11:21.023 { 00:11:21.023 "trtype": "TCP", 00:11:21.023 "adrfam": "IPv4", 00:11:21.023 "traddr": "10.0.0.2", 00:11:21.023 "trsvcid": "4420" 00:11:21.023 } 00:11:21.023 ], 00:11:21.023 "allow_any_host": true, 00:11:21.023 "hosts": [] 00:11:21.023 }, 00:11:21.023 { 00:11:21.023 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.023 "subtype": "NVMe", 00:11:21.023 "listen_addresses": [ 00:11:21.023 { 00:11:21.023 "trtype": "TCP", 00:11:21.023 "adrfam": "IPv4", 00:11:21.023 "traddr": "10.0.0.2", 00:11:21.023 "trsvcid": "4420" 00:11:21.023 } 00:11:21.023 ], 00:11:21.023 "allow_any_host": true, 00:11:21.023 "hosts": [], 00:11:21.023 "serial_number": "SPDK00000000000001", 00:11:21.023 "model_number": "SPDK bdev Controller", 00:11:21.023 "max_namespaces": 32, 00:11:21.023 "min_cntlid": 1, 00:11:21.023 "max_cntlid": 65519, 00:11:21.023 "namespaces": [ 00:11:21.023 { 00:11:21.023 "nsid": 1, 00:11:21.023 "bdev_name": "Null1", 00:11:21.023 "name": "Null1", 00:11:21.023 "nguid": "63A8CAC3240A4BAA83B8B769A30BDCE8", 00:11:21.023 "uuid": "63a8cac3-240a-4baa-83b8-b769a30bdce8" 00:11:21.023 } 00:11:21.023 ] 00:11:21.023 }, 00:11:21.023 { 00:11:21.023 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:21.023 "subtype": "NVMe", 00:11:21.023 "listen_addresses": [ 00:11:21.023 { 00:11:21.023 "trtype": "TCP", 00:11:21.023 "adrfam": "IPv4", 00:11:21.023 "traddr": "10.0.0.2", 00:11:21.023 "trsvcid": "4420" 00:11:21.023 } 00:11:21.023 ], 00:11:21.023 "allow_any_host": true, 00:11:21.023 "hosts": [], 00:11:21.023 "serial_number": "SPDK00000000000002", 00:11:21.023 "model_number": "SPDK bdev Controller", 00:11:21.023 "max_namespaces": 32, 00:11:21.023 "min_cntlid": 1, 00:11:21.023 "max_cntlid": 65519, 00:11:21.023 "namespaces": [ 00:11:21.023 { 00:11:21.023 "nsid": 1, 00:11:21.023 "bdev_name": "Null2", 00:11:21.023 "name": "Null2", 00:11:21.023 "nguid": "E694CF0A4B21404F9CF8B1C3306A1552", 
00:11:21.023 "uuid": "e694cf0a-4b21-404f-9cf8-b1c3306a1552" 00:11:21.023 } 00:11:21.023 ] 00:11:21.023 }, 00:11:21.023 { 00:11:21.023 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:21.023 "subtype": "NVMe", 00:11:21.023 "listen_addresses": [ 00:11:21.023 { 00:11:21.023 "trtype": "TCP", 00:11:21.023 "adrfam": "IPv4", 00:11:21.023 "traddr": "10.0.0.2", 00:11:21.023 "trsvcid": "4420" 00:11:21.023 } 00:11:21.023 ], 00:11:21.023 "allow_any_host": true, 00:11:21.023 "hosts": [], 00:11:21.023 "serial_number": "SPDK00000000000003", 00:11:21.023 "model_number": "SPDK bdev Controller", 00:11:21.023 "max_namespaces": 32, 00:11:21.023 "min_cntlid": 1, 00:11:21.023 "max_cntlid": 65519, 00:11:21.023 "namespaces": [ 00:11:21.023 { 00:11:21.023 "nsid": 1, 00:11:21.023 "bdev_name": "Null3", 00:11:21.023 "name": "Null3", 00:11:21.023 "nguid": "6B476BA1E780423192BD6DA116DC67B4", 00:11:21.023 "uuid": "6b476ba1-e780-4231-92bd-6da116dc67b4" 00:11:21.023 } 00:11:21.023 ] 00:11:21.023 }, 00:11:21.023 { 00:11:21.023 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:21.023 "subtype": "NVMe", 00:11:21.023 "listen_addresses": [ 00:11:21.023 { 00:11:21.023 "trtype": "TCP", 00:11:21.023 "adrfam": "IPv4", 00:11:21.023 "traddr": "10.0.0.2", 00:11:21.023 "trsvcid": "4420" 00:11:21.023 } 00:11:21.023 ], 00:11:21.023 "allow_any_host": true, 00:11:21.023 "hosts": [], 00:11:21.024 "serial_number": "SPDK00000000000004", 00:11:21.024 "model_number": "SPDK bdev Controller", 00:11:21.024 "max_namespaces": 32, 00:11:21.024 "min_cntlid": 1, 00:11:21.024 "max_cntlid": 65519, 00:11:21.024 "namespaces": [ 00:11:21.024 { 00:11:21.024 "nsid": 1, 00:11:21.024 "bdev_name": "Null4", 00:11:21.024 "name": "Null4", 00:11:21.024 "nguid": "40A274A65F8D4DA49599618538B845CB", 00:11:21.024 "uuid": "40a274a6-5f8d-4da4-9599-618538b845cb" 00:11:21.024 } 00:11:21.024 ] 00:11:21.024 } 00:11:21.024 ] 00:11:21.024 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.024 
12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:21.024 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:21.024 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.024 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.024 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.024 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.024 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:21.024 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.024 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:21.024 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:21.024 rmmod nvme_tcp 00:11:21.024 rmmod nvme_fabrics 00:11:21.024 rmmod nvme_keyring 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 355213 ']' 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 355213 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 355213 ']' 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 355213 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:21.284 
12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 355213 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 355213' 00:11:21.284 killing process with pid 355213 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 355213 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 355213 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.284 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:23.823 00:11:23.823 real 0m9.355s 00:11:23.823 user 0m5.512s 00:11:23.823 sys 0m4.882s 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.823 ************************************ 00:11:23.823 END TEST nvmf_target_discovery 00:11:23.823 ************************************ 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:23.823 ************************************ 00:11:23.823 START TEST nvmf_referrals 00:11:23.823 ************************************ 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:23.823 * Looking for test storage... 
00:11:23.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:23.823 12:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:23.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.823 
--rc genhtml_branch_coverage=1 00:11:23.823 --rc genhtml_function_coverage=1 00:11:23.823 --rc genhtml_legend=1 00:11:23.823 --rc geninfo_all_blocks=1 00:11:23.823 --rc geninfo_unexecuted_blocks=1 00:11:23.823 00:11:23.823 ' 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:23.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.823 --rc genhtml_branch_coverage=1 00:11:23.823 --rc genhtml_function_coverage=1 00:11:23.823 --rc genhtml_legend=1 00:11:23.823 --rc geninfo_all_blocks=1 00:11:23.823 --rc geninfo_unexecuted_blocks=1 00:11:23.823 00:11:23.823 ' 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:23.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.823 --rc genhtml_branch_coverage=1 00:11:23.823 --rc genhtml_function_coverage=1 00:11:23.823 --rc genhtml_legend=1 00:11:23.823 --rc geninfo_all_blocks=1 00:11:23.823 --rc geninfo_unexecuted_blocks=1 00:11:23.823 00:11:23.823 ' 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:23.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.823 --rc genhtml_branch_coverage=1 00:11:23.823 --rc genhtml_function_coverage=1 00:11:23.823 --rc genhtml_legend=1 00:11:23.823 --rc geninfo_all_blocks=1 00:11:23.823 --rc geninfo_unexecuted_blocks=1 00:11:23.823 00:11:23.823 ' 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.823 
12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.823 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.824 12:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:23.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:23.824 12:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:23.824 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:30.389 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:30.390 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:30.390 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:30.390 Found net devices under 0000:86:00.0: cvl_0_0 00:11:30.390 12:21:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:30.390 Found net devices under 0000:86:00.1: cvl_0_1 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:30.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:30.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:11:30.390 00:11:30.390 --- 10.0.0.2 ping statistics --- 00:11:30.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.390 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:30.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:30.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:11:30.390 00:11:30.390 --- 10.0.0.1 ping statistics --- 00:11:30.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.390 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=359241 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 359241 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 359241 ']' 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.390 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.390 [2024-11-20 12:21:12.770795] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:11:30.390 [2024-11-20 12:21:12.770840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.390 [2024-11-20 12:21:12.852087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.390 [2024-11-20 12:21:12.894708] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.390 [2024-11-20 12:21:12.894745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:30.390 [2024-11-20 12:21:12.894753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.390 [2024-11-20 12:21:12.894759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.391 [2024-11-20 12:21:12.894764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.391 [2024-11-20 12:21:12.896287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.391 [2024-11-20 12:21:12.896401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.391 [2024-11-20 12:21:12.896507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.391 [2024-11-20 12:21:12.896509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.649 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.649 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:30.649 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:30.649 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:30.649 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.649 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.649 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.649 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.649 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.649 [2024-11-20 12:21:13.656211] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.650 [2024-11-20 12:21:13.669552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:30.650 12:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.650 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.909 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:30.909 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:30.909 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:30.909 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:30.909 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:30.909 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:30.909 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:30.909 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:30.909 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:30.909 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:30.909 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:30.909 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.909 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.909 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.909 12:21:14 
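The `get_referral_ips` checks running above compare the referral list reported over RPC (`nvmf_discovery_get_referrals`) against the list reported on the wire by `nvme discover`; both sides are piped through `sort` before the `[[ == ]]` test, so only membership matters, not reporting order. A self-contained simulation of that comparison (the IP lists are the ones from the trace; the real helpers query a live target rather than fixed strings):

```shell
# Two views of the same referral set, in different orders.
rpc_ips="127.0.0.2 127.0.0.3 127.0.0.4"    # order as the RPC happened to return it
nvme_ips="127.0.0.4 127.0.0.2 127.0.0.3"   # order as discovery happened to return it

# Unquoted expansion is intentional: split on whitespace, one IP per line, then sort.
sorted_rpc=$(printf '%s\n' $rpc_ips | sort | xargs)
sorted_nvme=$(printf '%s\n' $nvme_ips | sort | xargs)

[[ $sorted_rpc == "$sorted_nvme" ]] && echo match
```

Without the `sort`, a target that enumerates referrals in insertion order and a host that sorts discovery records would produce spurious mismatches.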
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:30.909 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.909 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.168 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.168 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:31.168 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.168 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.168 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.168 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.168 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:31.168 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.168 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.168 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.168 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:31.168 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:31.168 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:31.168 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:31.168 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.168 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:31.168 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:31.427 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:31.686 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:31.686 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:31.686 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:31.686 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:31.686 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:31.686 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.686 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:31.686 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:31.686 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:31.686 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:31.687 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:31.687 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.687 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
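A note on how these comparisons render in the trace: the expected value appears as `\1\2\7\.\0\.\0\.\2 …` because the right-hand side of `[[ == ]]` is quoted in the test scripts, which makes bash match it as a literal string rather than a glob pattern, and xtrace prints it with every character escaped to make that explicit. A minimal reproduction (values borrowed from the trace):

```shell
# Quoted RHS of [[ == ]] is matched literally; under `set -x` bash prints it
# character-escaped, which is exactly the \1\2\7... form seen in the autotest log.
want="127.0.0.2 127.0.0.3 127.0.0.4"   # expected referral list
got="127.0.0.2 127.0.0.3 127.0.0.4"    # value under test
set -x                                  # same tracing mode the harness runs under
if [[ $got == "$want" ]]; then result=match; else result=mismatch; fi
set +x
echo "$result"
```

An unquoted RHS would instead be treated as a pattern, so `*` or `?` in a traddr could silently widen the match.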
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:31.946 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:31.946 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:31.946 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:31.946 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:31.946 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:31.946 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:31.946 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:32.204 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:32.204 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:32.204 12:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:32.204 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:32.204 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:32.204 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:32.204 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:32.462 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:32.462 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:32.462 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.462 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.462 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.462 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:32.463 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:32.463 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.463 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:32.463 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.463 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:32.463 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:32.463 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:32.463 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:32.463 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:32.463 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:32.463 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:32.722 rmmod nvme_tcp 00:11:32.722 rmmod nvme_fabrics 00:11:32.722 rmmod nvme_keyring 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 359241 ']' 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 359241 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 359241 ']' 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 359241 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 359241 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 359241' 00:11:32.722 killing process with pid 359241 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 359241 00:11:32.722 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 359241 00:11:32.982 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:32.982 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:32.982 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:32.982 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:32.982 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:32.982 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:32.982 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:32.982 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:32.982 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:32.982 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.982 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.982 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.889 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:34.889 00:11:34.889 real 0m11.473s 00:11:34.889 user 0m14.855s 00:11:34.889 sys 0m5.260s 00:11:34.889 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.889 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.889 ************************************ 
00:11:34.889 END TEST nvmf_referrals 00:11:34.889 ************************************ 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:35.148 ************************************ 00:11:35.148 START TEST nvmf_connect_disconnect 00:11:35.148 ************************************ 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:35.148 * Looking for test storage... 
00:11:35.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:35.148 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:35.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.149 --rc genhtml_branch_coverage=1 00:11:35.149 --rc genhtml_function_coverage=1 00:11:35.149 --rc genhtml_legend=1 00:11:35.149 --rc geninfo_all_blocks=1 00:11:35.149 --rc geninfo_unexecuted_blocks=1 00:11:35.149 00:11:35.149 ' 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:35.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.149 --rc genhtml_branch_coverage=1 00:11:35.149 --rc genhtml_function_coverage=1 00:11:35.149 --rc genhtml_legend=1 00:11:35.149 --rc geninfo_all_blocks=1 00:11:35.149 --rc geninfo_unexecuted_blocks=1 00:11:35.149 00:11:35.149 ' 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:35.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.149 --rc genhtml_branch_coverage=1 00:11:35.149 --rc genhtml_function_coverage=1 00:11:35.149 --rc genhtml_legend=1 00:11:35.149 --rc geninfo_all_blocks=1 00:11:35.149 --rc geninfo_unexecuted_blocks=1 00:11:35.149 00:11:35.149 ' 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:35.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.149 --rc genhtml_branch_coverage=1 00:11:35.149 --rc genhtml_function_coverage=1 00:11:35.149 --rc genhtml_legend=1 00:11:35.149 --rc geninfo_all_blocks=1 00:11:35.149 --rc geninfo_unexecuted_blocks=1 00:11:35.149 00:11:35.149 ' 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.149 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:35.408 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:41.980 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.980 12:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:41.980 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:41.980 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:41.980 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:41.980 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:41.980 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:41.980 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:41.980 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:41.980 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:41.980 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:41.980 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:41.980 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:41.980 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:41.980 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:41.980 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.980 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:41.981 12:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:41.981 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:41.981 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.981 12:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:41.981 Found net devices under 0000:86:00.0: cvl_0_0 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.981 12:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:41.981 Found net devices under 0000:86:00.1: cvl_0_1 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.981 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.981 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.981 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.981 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:41.981 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.981 12:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:41.981 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:41.981 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:41.981 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:41.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:41.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.414 ms
00:11:41.981
00:11:41.981 --- 10.0.0.2 ping statistics ---
00:11:41.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:41.981 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms
00:11:41.981 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:41.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:41.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms
00:11:41.981
00:11:41.981 --- 10.0.0.1 ping statistics ---
00:11:41.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:41.981 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms
00:11:41.981 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:41.981 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0
00:11:41.981 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:41.981 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:41.981 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:41.981 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:41.981 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:41.981 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:41.981 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:41.981 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=363327
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 363327
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 363327 ']'
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:41.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:41.982 [2024-11-20 12:21:24.291462] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization...
00:11:41.982 [2024-11-20 12:21:24.291509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:41.982 [2024-11-20 12:21:24.356526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:41.982 [2024-11-20 12:21:24.400346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:41.982 [2024-11-20 12:21:24.400382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:41.982 [2024-11-20 12:21:24.400389] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:41.982 [2024-11-20 12:21:24.400395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:41.982 [2024-11-20 12:21:24.400401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:41.982 [2024-11-20 12:21:24.402003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:41.982 [2024-11-20 12:21:24.402245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:41.982 [2024-11-20 12:21:24.402267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:41.982 [2024-11-20 12:21:24.402267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:41.982 [2024-11-20 12:21:24.547040] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:11:41.982 [2024-11-20 12:21:24.612406] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']'
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5
00:11:41.982 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
00:11:45.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:48.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:51.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:55.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:57.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:57.936 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
00:11:57.936 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini
00:11:57.936 12:21:40
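Stripped of the xtrace noise, the target-side setup traced above is five RPC calls issued through the harness's rpc_cmd helper (a wrapper around SPDK's scripts/rpc.py). A minimal stand-alone sketch of the same sequence; the scripts/rpc.py path and the DRY_RUN guard are illustrative assumptions, not part of the real test script:

```shell
#!/usr/bin/env bash
set -euo pipefail

rpc_py="${SPDK_RPC:-scripts/rpc.py}"   # assumed rpc.py location; talks to /var/tmp/spdk.sock by default
DRY_RUN="${DRY_RUN:-1}"                # 1 = print the commands instead of issuing them

# Stand-in for the harness's rpc_cmd helper.
rpc_cmd() {
    if [[ "$DRY_RUN" == 1 ]]; then
        echo "$rpc_py $*"
    else
        "$rpc_py" "$@"
    fi
}

# Same order as connect_disconnect.sh lines 18-24 in the trace:
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0    # TCP transport, 8192 B in-capsule data, no data digest queue
rpc_cmd bdev_malloc_create 64 512                       # 64 MiB malloc bdev with 512 B blocks -> Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

With DRY_RUN=0 and a running nvmf_tgt this produces exactly the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" state the log reports, after which the test loops five connect/disconnect iterations against cnode1.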
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:57.936 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync
00:11:57.936 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:57.936 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e
00:11:57.936 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:57.936 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:57.936 rmmod nvme_tcp
00:11:57.936 rmmod nvme_fabrics
00:11:57.936 rmmod nvme_keyring
00:11:57.936 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:57.936 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e
00:11:57.936 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0
00:11:57.936 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 363327 ']'
00:11:57.936 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 363327
00:11:57.936 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 363327 ']'
00:11:57.936 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 363327
00:11:57.936 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname
00:11:57.936 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:57.936 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 363327
00:11:58.195 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:58.195 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:58.195 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 363327'
00:11:58.195 killing process with pid 363327
00:11:58.195 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 363327
00:11:58.195 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 363327
00:11:58.195 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:58.196 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:58.196 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:58.196 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr
00:11:58.196 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:11:58.196 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:58.196 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:11:58.196 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:58.196 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:58.196 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:58.196 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:58.196 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:00.733
00:12:00.733 real 0m25.273s
00:12:00.733 user 1m8.691s
00:12:00.733 sys 0m5.823s
00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:12:00.733 ************************************
00:12:00.733 END TEST nvmf_connect_disconnect
00:12:00.733 ************************************
00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:00.733 ************************************
00:12:00.733 START TEST nvmf_multitarget
00:12:00.733 ************************************
00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:12:00.733 * Looking for test storage...
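The iptr teardown in the trace above works because ipts tagged every firewall rule the test added with an SPDK_NVMF comment at setup time (nvmf/common.sh@790); cleanup is then just a filter over iptables-save output piped back into iptables-restore. A rough sketch of that tag-and-sweep idea, simulated on a canned ruleset string because the real code operates on the live firewall (the sample rules here are invented for illustration):

```shell
#!/usr/bin/env bash
# Tag-and-sweep: rules the test owns carry an SPDK_NVMF comment marker,
# so teardown can drop exactly those rules and nothing else.
ruleset='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 ..."
-A INPUT -p icmp -j ACCEPT'

# Equivalent of: iptables-save | grep -v SPDK_NVMF | iptables-restore
cleaned=$(printf '%s\n' "$ruleset" | grep -v SPDK_NVMF)
printf '%s\n' "$cleaned"
```

The pre-existing lo and icmp rules survive while the test's port-4420 rule is swept away, which is why the harness never needs to remember which rules it inserted.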
00:12:00.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:00.733 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.733 --rc genhtml_branch_coverage=1 00:12:00.733 --rc genhtml_function_coverage=1 00:12:00.733 --rc genhtml_legend=1 00:12:00.733 --rc geninfo_all_blocks=1 00:12:00.733 --rc geninfo_unexecuted_blocks=1 00:12:00.733 00:12:00.733 ' 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:00.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.733 --rc genhtml_branch_coverage=1 00:12:00.733 --rc genhtml_function_coverage=1 00:12:00.733 --rc genhtml_legend=1 00:12:00.733 --rc geninfo_all_blocks=1 00:12:00.733 --rc geninfo_unexecuted_blocks=1 00:12:00.733 00:12:00.733 ' 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:00.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.733 --rc genhtml_branch_coverage=1 00:12:00.733 --rc genhtml_function_coverage=1 00:12:00.733 --rc genhtml_legend=1 00:12:00.733 --rc geninfo_all_blocks=1 00:12:00.733 --rc geninfo_unexecuted_blocks=1 00:12:00.733 00:12:00.733 ' 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:00.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.733 --rc genhtml_branch_coverage=1 00:12:00.733 --rc genhtml_function_coverage=1 00:12:00.733 --rc genhtml_legend=1 00:12:00.733 --rc geninfo_all_blocks=1 00:12:00.733 --rc geninfo_unexecuted_blocks=1 00:12:00.733 00:12:00.733 ' 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.733 12:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.733 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:00.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.734 12:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:00.734 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:07.308 12:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:07.308 12:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:07.308 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.308 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:07.309 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.309 12:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:07.309 Found net devices under 0000:86:00.0: cvl_0_0 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.309 
12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:07.309 Found net devices under 0000:86:00.1: cvl_0_1 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.309 12:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:07.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:12:07.309 00:12:07.309 --- 10.0.0.2 ping statistics --- 00:12:07.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.309 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:07.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:12:07.309 00:12:07.309 --- 10.0.0.1 ping statistics --- 00:12:07.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.309 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=369727 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 369727 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 369727 ']' 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.309 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:07.309 [2024-11-20 12:21:49.643397] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:12:07.309 [2024-11-20 12:21:49.643440] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.309 [2024-11-20 12:21:49.723587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.309 [2024-11-20 12:21:49.766961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.309 [2024-11-20 12:21:49.766999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:07.309 [2024-11-20 12:21:49.767006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.309 [2024-11-20 12:21:49.767012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.309 [2024-11-20 12:21:49.767017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.309 [2024-11-20 12:21:49.768600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.310 [2024-11-20 12:21:49.768717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.310 [2024-11-20 12:21:49.768826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.310 [2024-11-20 12:21:49.768827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.310 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.310 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:07.310 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:07.310 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:07.310 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:07.310 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.310 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:07.310 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:07.310 12:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:07.310 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:07.310 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:07.310 "nvmf_tgt_1" 00:12:07.310 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:07.310 "nvmf_tgt_2" 00:12:07.310 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:07.310 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:07.310 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:07.310 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:07.569 true 00:12:07.569 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:07.569 true 00:12:07.569 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:07.569 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:07.569 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:07.569 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:07.569 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:07.569 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:07.569 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:07.569 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:07.569 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:07.829 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:07.829 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:07.829 rmmod nvme_tcp 00:12:07.829 rmmod nvme_fabrics 00:12:07.829 rmmod nvme_keyring 00:12:07.829 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:07.829 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:07.829 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:07.829 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 369727 ']' 00:12:07.829 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 369727 00:12:07.829 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 369727 ']' 00:12:07.829 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 369727 00:12:07.829 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:07.829 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.829 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 369727 00:12:07.829 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.829 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.829 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 369727' 00:12:07.829 killing process with pid 369727 00:12:07.829 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 369727 00:12:07.829 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 369727 00:12:08.088 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:08.088 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:08.088 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:08.089 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:08.089 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:08.089 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:08.089 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:08.089 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:08.089 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:08.089 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.089 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.089 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.996 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:09.996 00:12:09.996 real 0m9.613s 00:12:09.996 user 0m7.291s 00:12:09.996 sys 0m4.879s 00:12:09.996 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.996 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:09.996 ************************************ 00:12:09.996 END TEST nvmf_multitarget 00:12:09.996 ************************************ 00:12:09.996 12:21:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:09.996 12:21:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:09.996 12:21:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.996 12:21:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:09.996 ************************************ 00:12:09.996 START TEST nvmf_rpc 00:12:09.996 ************************************ 00:12:09.996 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:10.255 * Looking for test storage... 
00:12:10.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:10.255 12:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:10.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.255 --rc genhtml_branch_coverage=1 00:12:10.255 --rc genhtml_function_coverage=1 00:12:10.255 --rc genhtml_legend=1 00:12:10.255 --rc geninfo_all_blocks=1 00:12:10.255 --rc geninfo_unexecuted_blocks=1 
00:12:10.255 00:12:10.255 ' 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:10.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.255 --rc genhtml_branch_coverage=1 00:12:10.255 --rc genhtml_function_coverage=1 00:12:10.255 --rc genhtml_legend=1 00:12:10.255 --rc geninfo_all_blocks=1 00:12:10.255 --rc geninfo_unexecuted_blocks=1 00:12:10.255 00:12:10.255 ' 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:10.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.255 --rc genhtml_branch_coverage=1 00:12:10.255 --rc genhtml_function_coverage=1 00:12:10.255 --rc genhtml_legend=1 00:12:10.255 --rc geninfo_all_blocks=1 00:12:10.255 --rc geninfo_unexecuted_blocks=1 00:12:10.255 00:12:10.255 ' 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:10.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.255 --rc genhtml_branch_coverage=1 00:12:10.255 --rc genhtml_function_coverage=1 00:12:10.255 --rc genhtml_legend=1 00:12:10.255 --rc geninfo_all_blocks=1 00:12:10.255 --rc geninfo_unexecuted_blocks=1 00:12:10.255 00:12:10.255 ' 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.255 12:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:10.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:10.255 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:10.255 12:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.827 
12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.827 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:12:16.827 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:16.827 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:16.827 Found net devices under 0000:86:00.0: cvl_0_0 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:16.827 Found net devices under 0000:86:00.1: cvl_0_1 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.827 12:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:16.827 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:16.827 
12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:16.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:12:16.828 00:12:16.828 --- 10.0.0.2 ping statistics --- 00:12:16.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.828 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:16.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:16.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:12:16.828 00:12:16.828 --- 10.0.0.1 ping statistics --- 00:12:16.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.828 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=373522 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 373522 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 373522 ']' 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.828 [2024-11-20 12:21:59.336344] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:12:16.828 [2024-11-20 12:21:59.336396] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.828 [2024-11-20 12:21:59.413587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.828 [2024-11-20 12:21:59.456656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.828 [2024-11-20 12:21:59.456697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:16.828 [2024-11-20 12:21:59.456704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.828 [2024-11-20 12:21:59.456710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.828 [2024-11-20 12:21:59.456715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.828 [2024-11-20 12:21:59.458301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.828 [2024-11-20 12:21:59.458412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.828 [2024-11-20 12:21:59.458520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.828 [2024-11-20 12:21:59.458521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.828 12:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:16.828 "tick_rate": 2300000000, 00:12:16.828 "poll_groups": [ 00:12:16.828 { 00:12:16.828 "name": "nvmf_tgt_poll_group_000", 00:12:16.828 "admin_qpairs": 0, 00:12:16.828 "io_qpairs": 0, 00:12:16.828 "current_admin_qpairs": 0, 00:12:16.828 "current_io_qpairs": 0, 00:12:16.828 "pending_bdev_io": 0, 00:12:16.828 "completed_nvme_io": 0, 00:12:16.828 "transports": [] 00:12:16.828 }, 00:12:16.828 { 00:12:16.828 "name": "nvmf_tgt_poll_group_001", 00:12:16.828 "admin_qpairs": 0, 00:12:16.828 "io_qpairs": 0, 00:12:16.828 "current_admin_qpairs": 0, 00:12:16.828 "current_io_qpairs": 0, 00:12:16.828 "pending_bdev_io": 0, 00:12:16.828 "completed_nvme_io": 0, 00:12:16.828 "transports": [] 00:12:16.828 }, 00:12:16.828 { 00:12:16.828 "name": "nvmf_tgt_poll_group_002", 00:12:16.828 "admin_qpairs": 0, 00:12:16.828 "io_qpairs": 0, 00:12:16.828 "current_admin_qpairs": 0, 00:12:16.828 "current_io_qpairs": 0, 00:12:16.828 "pending_bdev_io": 0, 00:12:16.828 "completed_nvme_io": 0, 00:12:16.828 "transports": [] 00:12:16.828 }, 00:12:16.828 { 00:12:16.828 "name": "nvmf_tgt_poll_group_003", 00:12:16.828 "admin_qpairs": 0, 00:12:16.828 "io_qpairs": 0, 00:12:16.828 "current_admin_qpairs": 0, 00:12:16.828 "current_io_qpairs": 0, 00:12:16.828 "pending_bdev_io": 0, 00:12:16.828 "completed_nvme_io": 0, 00:12:16.828 "transports": [] 00:12:16.828 } 00:12:16.828 ] 00:12:16.828 }' 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:16.828 12:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.828 [2024-11-20 12:21:59.708542] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.828 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:16.828 "tick_rate": 2300000000, 00:12:16.828 "poll_groups": [ 00:12:16.828 { 00:12:16.828 "name": "nvmf_tgt_poll_group_000", 00:12:16.828 "admin_qpairs": 0, 00:12:16.828 "io_qpairs": 0, 00:12:16.828 "current_admin_qpairs": 0, 00:12:16.828 "current_io_qpairs": 0, 00:12:16.828 "pending_bdev_io": 0, 00:12:16.828 "completed_nvme_io": 0, 00:12:16.828 "transports": [ 00:12:16.828 { 00:12:16.828 "trtype": "TCP" 00:12:16.828 } 00:12:16.828 ] 00:12:16.828 }, 00:12:16.829 { 00:12:16.829 "name": "nvmf_tgt_poll_group_001", 00:12:16.829 "admin_qpairs": 0, 00:12:16.829 "io_qpairs": 0, 00:12:16.829 "current_admin_qpairs": 0, 00:12:16.829 "current_io_qpairs": 0, 00:12:16.829 "pending_bdev_io": 0, 00:12:16.829 
"completed_nvme_io": 0, 00:12:16.829 "transports": [ 00:12:16.829 { 00:12:16.829 "trtype": "TCP" 00:12:16.829 } 00:12:16.829 ] 00:12:16.829 }, 00:12:16.829 { 00:12:16.829 "name": "nvmf_tgt_poll_group_002", 00:12:16.829 "admin_qpairs": 0, 00:12:16.829 "io_qpairs": 0, 00:12:16.829 "current_admin_qpairs": 0, 00:12:16.829 "current_io_qpairs": 0, 00:12:16.829 "pending_bdev_io": 0, 00:12:16.829 "completed_nvme_io": 0, 00:12:16.829 "transports": [ 00:12:16.829 { 00:12:16.829 "trtype": "TCP" 00:12:16.829 } 00:12:16.829 ] 00:12:16.829 }, 00:12:16.829 { 00:12:16.829 "name": "nvmf_tgt_poll_group_003", 00:12:16.829 "admin_qpairs": 0, 00:12:16.829 "io_qpairs": 0, 00:12:16.829 "current_admin_qpairs": 0, 00:12:16.829 "current_io_qpairs": 0, 00:12:16.829 "pending_bdev_io": 0, 00:12:16.829 "completed_nvme_io": 0, 00:12:16.829 "transports": [ 00:12:16.829 { 00:12:16.829 "trtype": "TCP" 00:12:16.829 } 00:12:16.829 ] 00:12:16.829 } 00:12:16.829 ] 00:12:16.829 }' 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:16.829 
12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.829 Malloc1 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:16.829 12:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.829 [2024-11-20 12:21:59.888416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:16.829 [2024-11-20 12:21:59.917103] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:16.829 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:16.829 could not add new controller: failed to write to nvme-fabrics device 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:16.829 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:17.088 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:17.088 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:17.088 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:17.088 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.088 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.088 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.088 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.024 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:18.024 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:18.024 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.024 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:18.024 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:19.928 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:20.186 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:20.186 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.186 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:20.186 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.186 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:12:20.186 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.186 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:20.186 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:20.186 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:20.186 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.186 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:20.186 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.186 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:20.186 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:20.187 12:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.187 [2024-11-20 12:22:03.250204] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:20.187 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:20.187 could not add new controller: failed to write to nvme-fabrics device 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:20.187 
12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.187 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.565 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.565 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:21.565 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.565 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:21.565 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:23.468 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:23.468 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:23.468 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:23.468 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:23.468 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.468 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:23.468 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.469 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.469 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:23.469 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:23.469 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.469 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:23.469 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.469 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:23.469 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.469 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.469 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.469 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.469 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:23.469 12:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:23.469 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:23.469 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.469 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.727 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.727 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.727 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.727 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.727 [2024-11-20 12:22:06.598876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.727 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.727 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:23.727 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.727 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.727 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.727 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:23.727 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.727 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:23.727 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.727 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:24.663 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:24.663 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:24.663 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:24.663 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:24.663 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:27.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:27.197 
12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.197 [2024-11-20 12:22:09.902412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:27.198 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.198 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.198 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.198 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.134 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:28.134 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:28.134 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.134 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:28.134 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:30.038 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:30.038 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:30.038 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.038 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:30.038 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.038 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:30.038 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.038 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.038 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:30.039 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:30.039 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.039 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:30.039 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.298 12:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.298 [2024-11-20 12:22:13.196226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.298 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.676 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.676 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:31.676 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.676 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:31.676 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:33.582 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:33.582 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:33.582 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.582 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:33.582 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.582 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:33.582 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.582 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.582 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:33.582 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:33.582 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.582 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:33.582 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.582 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:33.582 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.583 [2024-11-20 12:22:16.578524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.583 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.961 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.961 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:34.961 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:12:34.961 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:34.962 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.865 [2024-11-20 12:22:19.902722] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.865 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.247 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.247 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:38.247 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.247 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:38.247 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.153 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.153 [2024-11-20 12:22:23.268525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.413 [2024-11-20 12:22:23.316628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.413 
12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:40.413 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:12:40.414 [2024-11-20 12:22:23.364780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.414 [2024-11-20 12:22:23.412945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.414 [2024-11-20 12:22:23.461129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:40.414 "tick_rate": 2300000000, 00:12:40.414 "poll_groups": [ 00:12:40.414 { 00:12:40.414 "name": "nvmf_tgt_poll_group_000", 00:12:40.414 "admin_qpairs": 2, 00:12:40.414 "io_qpairs": 168, 00:12:40.414 "current_admin_qpairs": 0, 00:12:40.414 "current_io_qpairs": 0, 00:12:40.414 "pending_bdev_io": 0, 00:12:40.414 "completed_nvme_io": 366, 00:12:40.414 "transports": [ 00:12:40.414 { 00:12:40.414 "trtype": "TCP" 00:12:40.414 } 00:12:40.414 ] 00:12:40.414 }, 00:12:40.414 { 00:12:40.414 "name": "nvmf_tgt_poll_group_001", 00:12:40.414 "admin_qpairs": 2, 00:12:40.414 "io_qpairs": 168, 00:12:40.414 "current_admin_qpairs": 0, 00:12:40.414 "current_io_qpairs": 0, 00:12:40.414 "pending_bdev_io": 0, 00:12:40.414 "completed_nvme_io": 220, 00:12:40.414 "transports": [ 00:12:40.414 { 00:12:40.414 "trtype": "TCP" 00:12:40.414 } 00:12:40.414 ] 00:12:40.414 }, 00:12:40.414 { 00:12:40.414 "name": "nvmf_tgt_poll_group_002", 00:12:40.414 "admin_qpairs": 1, 00:12:40.414 "io_qpairs": 168, 00:12:40.414 "current_admin_qpairs": 0, 00:12:40.414 "current_io_qpairs": 0, 00:12:40.414 "pending_bdev_io": 0, 
00:12:40.414 "completed_nvme_io": 267, 00:12:40.414 "transports": [ 00:12:40.414 { 00:12:40.414 "trtype": "TCP" 00:12:40.414 } 00:12:40.414 ] 00:12:40.414 }, 00:12:40.414 { 00:12:40.414 "name": "nvmf_tgt_poll_group_003", 00:12:40.414 "admin_qpairs": 2, 00:12:40.414 "io_qpairs": 168, 00:12:40.414 "current_admin_qpairs": 0, 00:12:40.414 "current_io_qpairs": 0, 00:12:40.414 "pending_bdev_io": 0, 00:12:40.414 "completed_nvme_io": 169, 00:12:40.414 "transports": [ 00:12:40.414 { 00:12:40.414 "trtype": "TCP" 00:12:40.414 } 00:12:40.414 ] 00:12:40.414 } 00:12:40.414 ] 00:12:40.414 }' 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:40.414 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:40.415 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:40.674 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:40.674 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:40.674 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:40.674 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:40.674 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:40.674 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:40.674 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:40.674 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:40.674 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:12:40.674 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:40.674 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:40.674 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:40.675 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:40.675 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:40.675 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:40.675 rmmod nvme_tcp 00:12:40.675 rmmod nvme_fabrics 00:12:40.675 rmmod nvme_keyring 00:12:40.675 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:40.675 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:40.675 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:40.675 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 373522 ']' 00:12:40.675 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 373522 00:12:40.675 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 373522 ']' 00:12:40.675 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 373522 00:12:40.675 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:40.675 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.675 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 373522 00:12:40.675 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.675 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.675 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 373522' 00:12:40.675 killing process with pid 373522 00:12:40.675 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 373522 00:12:40.675 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 373522 00:12:40.934 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:40.934 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:40.934 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:40.934 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:40.934 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:40.934 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:40.934 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:40.934 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:40.934 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:40.934 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.934 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.934 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.499 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:43.499 00:12:43.499 real 0m32.895s 00:12:43.499 user 1m39.161s 00:12:43.499 sys 0m6.551s 00:12:43.499 12:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.499 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.499 ************************************ 00:12:43.499 END TEST nvmf_rpc 00:12:43.499 ************************************ 00:12:43.499 12:22:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:43.499 12:22:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:43.499 12:22:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.499 12:22:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:43.499 ************************************ 00:12:43.499 START TEST nvmf_invalid 00:12:43.499 ************************************ 00:12:43.499 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:43.499 * Looking for test storage... 
00:12:43.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.499 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:43.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.500 --rc genhtml_branch_coverage=1 00:12:43.500 --rc 
genhtml_function_coverage=1 00:12:43.500 --rc genhtml_legend=1 00:12:43.500 --rc geninfo_all_blocks=1 00:12:43.500 --rc geninfo_unexecuted_blocks=1 00:12:43.500 00:12:43.500 ' 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:43.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.500 --rc genhtml_branch_coverage=1 00:12:43.500 --rc genhtml_function_coverage=1 00:12:43.500 --rc genhtml_legend=1 00:12:43.500 --rc geninfo_all_blocks=1 00:12:43.500 --rc geninfo_unexecuted_blocks=1 00:12:43.500 00:12:43.500 ' 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:43.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.500 --rc genhtml_branch_coverage=1 00:12:43.500 --rc genhtml_function_coverage=1 00:12:43.500 --rc genhtml_legend=1 00:12:43.500 --rc geninfo_all_blocks=1 00:12:43.500 --rc geninfo_unexecuted_blocks=1 00:12:43.500 00:12:43.500 ' 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:43.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.500 --rc genhtml_branch_coverage=1 00:12:43.500 --rc genhtml_function_coverage=1 00:12:43.500 --rc genhtml_legend=1 00:12:43.500 --rc geninfo_all_blocks=1 00:12:43.500 --rc geninfo_unexecuted_blocks=1 00:12:43.500 00:12:43.500 ' 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.500 12:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:43.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:43.500 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:43.501 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:43.501 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:43.501 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:43.501 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:43.501 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.501 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:43.501 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:43.501 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:43.501 12:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.501 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.501 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.501 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:43.501 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:43.501 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:43.501 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:48.863 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.863 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:48.863 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:48.863 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:48.863 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:48.863 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:48.863 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:48.863 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:48.863 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:48.864 12:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.864 12:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:48.864 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:48.864 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:48.864 Found net devices under 0000:86:00.0: cvl_0_0 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:48.864 Found net devices under 0000:86:00.1: cvl_0_1 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.864 12:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.864 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:49.124 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:49.124 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:49.124 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:49.124 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:49.124 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:49.124 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:49.124 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:49.124 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:49.124 12:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:49.124 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:49.124 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:49.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:49.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:12:49.124 00:12:49.124 --- 10.0.0.2 ping statistics --- 00:12:49.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.124 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:12:49.124 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:49.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:49.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:12:49.124 00:12:49.124 --- 10.0.0.1 ping statistics --- 00:12:49.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.124 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:12:49.124 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.124 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:49.124 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:49.124 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.124 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:49.124 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:49.124 12:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.125 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:49.125 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:49.384 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:49.384 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:49.384 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:49.384 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:49.384 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=381341 00:12:49.384 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:49.384 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 381341 00:12:49.384 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 381341 ']' 00:12:49.384 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.384 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:49.384 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:49.384 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:49.384 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:49.384 [2024-11-20 12:22:32.306090] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:12:49.384 [2024-11-20 12:22:32.306136] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.384 [2024-11-20 12:22:32.387762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.384 [2024-11-20 12:22:32.430741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.384 [2024-11-20 12:22:32.430778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.384 [2024-11-20 12:22:32.430784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.384 [2024-11-20 12:22:32.430790] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.384 [2024-11-20 12:22:32.430796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:49.385 [2024-11-20 12:22:32.432327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.385 [2024-11-20 12:22:32.432438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.385 [2024-11-20 12:22:32.432566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.385 [2024-11-20 12:22:32.432567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.644 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:49.644 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:49.644 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:49.644 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:49.644 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:49.644 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.644 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:49.644 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode23987 00:12:49.644 [2024-11-20 12:22:32.738602] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:49.903 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:49.903 { 00:12:49.903 "nqn": "nqn.2016-06.io.spdk:cnode23987", 00:12:49.903 "tgt_name": "foobar", 00:12:49.903 "method": "nvmf_create_subsystem", 00:12:49.903 "req_id": 1 00:12:49.903 } 00:12:49.903 Got JSON-RPC error 
response 00:12:49.903 response: 00:12:49.903 { 00:12:49.903 "code": -32603, 00:12:49.903 "message": "Unable to find target foobar" 00:12:49.903 }' 00:12:49.903 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:49.903 { 00:12:49.903 "nqn": "nqn.2016-06.io.spdk:cnode23987", 00:12:49.903 "tgt_name": "foobar", 00:12:49.903 "method": "nvmf_create_subsystem", 00:12:49.903 "req_id": 1 00:12:49.903 } 00:12:49.903 Got JSON-RPC error response 00:12:49.903 response: 00:12:49.903 { 00:12:49.903 "code": -32603, 00:12:49.903 "message": "Unable to find target foobar" 00:12:49.903 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:49.903 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:49.903 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18468 00:12:49.903 [2024-11-20 12:22:32.943299] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18468: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:49.903 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:49.903 { 00:12:49.903 "nqn": "nqn.2016-06.io.spdk:cnode18468", 00:12:49.903 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:49.903 "method": "nvmf_create_subsystem", 00:12:49.903 "req_id": 1 00:12:49.903 } 00:12:49.903 Got JSON-RPC error response 00:12:49.903 response: 00:12:49.903 { 00:12:49.903 "code": -32602, 00:12:49.903 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:49.903 }' 00:12:49.903 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:49.903 { 00:12:49.903 "nqn": "nqn.2016-06.io.spdk:cnode18468", 00:12:49.903 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:49.903 "method": "nvmf_create_subsystem", 
00:12:49.903 "req_id": 1 00:12:49.903 } 00:12:49.903 Got JSON-RPC error response 00:12:49.903 response: 00:12:49.903 { 00:12:49.903 "code": -32602, 00:12:49.903 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:49.903 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:49.903 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:49.903 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4056 00:12:50.163 [2024-11-20 12:22:33.143967] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4056: invalid model number 'SPDK_Controller' 00:12:50.163 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:50.163 { 00:12:50.163 "nqn": "nqn.2016-06.io.spdk:cnode4056", 00:12:50.163 "model_number": "SPDK_Controller\u001f", 00:12:50.163 "method": "nvmf_create_subsystem", 00:12:50.163 "req_id": 1 00:12:50.163 } 00:12:50.163 Got JSON-RPC error response 00:12:50.163 response: 00:12:50.163 { 00:12:50.163 "code": -32602, 00:12:50.163 "message": "Invalid MN SPDK_Controller\u001f" 00:12:50.163 }' 00:12:50.163 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:50.163 { 00:12:50.163 "nqn": "nqn.2016-06.io.spdk:cnode4056", 00:12:50.163 "model_number": "SPDK_Controller\u001f", 00:12:50.163 "method": "nvmf_create_subsystem", 00:12:50.163 "req_id": 1 00:12:50.163 } 00:12:50.163 Got JSON-RPC error response 00:12:50.163 response: 00:12:50.163 { 00:12:50.163 "code": -32602, 00:12:50.163 "message": "Invalid MN SPDK_Controller\u001f" 00:12:50.163 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:50.163 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:50.163 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll
00:12:50.163 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:12:50.163 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:12:50.164 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:12:50.164 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:12:50.164 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m
00:12:50.164 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q
00:12:50.164 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k
00:12:50.164 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"'
00:12:50.164 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I
00:12:50.164 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?'
00:12:50.164 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1
00:12:50.164 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q
00:12:50.164 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T
00:12:50.164 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\'
00:12:50.164 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I
00:12:50.164 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=%
00:12:50.164 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C
00:12:50.164 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h
00:12:50.164 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U
00:12:50.164 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=%
00:12:50.424 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z
00:12:50.424 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h
00:12:50.424 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C
00:12:50.424 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9
00:12:50.424 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B
00:12:50.424 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ m == \- ]]
00:12:50.424 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'mqk"I?1qT\I%ChU%ZhC9B'
00:12:50.424 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'mqk"I?1qT\I%ChU%ZhC9B' nqn.2016-06.io.spdk:cnode28569
00:12:50.424 [2024-11-20 12:22:33.497159] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28569: invalid serial number 'mqk"I?1qT\I%ChU%ZhC9B'
00:12:50.424 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:12:50.424 {
00:12:50.424 "nqn": "nqn.2016-06.io.spdk:cnode28569",
00:12:50.424 "serial_number": "mqk\"I?1qT\\I%ChU%ZhC9B",
00:12:50.424 "method": "nvmf_create_subsystem",
00:12:50.424 "req_id": 1
00:12:50.424 }
00:12:50.424 Got JSON-RPC error response
00:12:50.424 response:
00:12:50.424 {
00:12:50.424 "code": -32602,
00:12:50.424 "message": "Invalid SN mqk\"I?1qT\\I%ChU%ZhC9B"
00:12:50.424 }'
00:12:50.424 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:12:50.424 {
00:12:50.424 "nqn": "nqn.2016-06.io.spdk:cnode28569",
00:12:50.424 "serial_number": "mqk\"I?1qT\\I%ChU%ZhC9B",
00:12:50.424 "method": "nvmf_create_subsystem",
00:12:50.424 "req_id": 1
00:12:50.424 }
00:12:50.424 Got JSON-RPC error response
00:12:50.424 response:
00:12:50.424 {
00:12:50.424 "code": -32602,
00:12:50.424 "message": "Invalid SN mqk\"I?1qT\\I%ChU%ZhC9B"
00:12:50.424 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:12:50.424 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:12:50.424 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:50.424
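The trace above is one character-at-a-time unrolling of the test suite's `gen_random_s` helper: pick `length` codes from the printable-ASCII `chars` array (decimal 32..127), convert each with `printf %x` / `echo -e '\xNN'`, and append it to `string` (invalid.sh@25), with invalid.sh@28 guarding against a string that starts with `-`. A minimal standalone sketch of that logic, reconstructed from the trace rather than copied from invalid.sh (the `RANDOM`-based index selection here is an assumption; only the chars range and the printf/echo conversion are taken from the log):

```shell
#!/usr/bin/env bash
# Sketch of the generator the trace unrolls: build a random string of
# `length` characters drawn from ASCII codes 32..127.
gen_random_s() {
    local length=$1 ll
    local chars=($(seq 32 127))   # same code range as the chars=() array in the log
    local string=
    for (( ll = 0; ll < length; ll++ )); do
        # printf %x gives the hex code; echo -e '\xNN' emits the character,
        # mirroring the invalid.sh@25 lines in the trace.
        string+=$(echo -e "\\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
    done
    # invalid.sh@28 rejects a leading '-'; printf sidesteps echo option parsing.
    printf '%s\n' "$string"
}

gen_random_s 21   # e.g. a 21-character candidate serial number
```

The generated value is then fed to `rpc.py nvmf_create_subsystem` (`-s` for a serial number, `-d` for a model number, as in the calls above), and the test passes only when the JSON-RPC response is an error whose message contains `Invalid SN` or `Invalid MN`.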
12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:50.424 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:50.424 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:50.424 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:50.424 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.424 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.684 12:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:50.684 12:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.684 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:50.685 12:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:50.685 12:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:50.685 12:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:50.685 12:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:50.685 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:50.686 12:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.686 12:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ z == \- ]] 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'z~G)pYdmP@K+PA["7@&q]S#a__oEK_^1 cL ~GI?x' 00:12:50.686 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'z~G)pYdmP@K+PA["7@&q]S#a__oEK_^1 cL ~GI?x' nqn.2016-06.io.spdk:cnode12730 00:12:50.945 [2024-11-20 12:22:33.970738] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12730: invalid model number 'z~G)pYdmP@K+PA["7@&q]S#a__oEK_^1 cL ~GI?x' 00:12:50.945 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:50.945 { 00:12:50.945 "nqn": "nqn.2016-06.io.spdk:cnode12730", 00:12:50.945 "model_number": "z~G)pYdmP@K+PA[\"7@&q]S#a__oEK_^1 cL ~GI?x", 00:12:50.945 "method": "nvmf_create_subsystem", 00:12:50.945 "req_id": 1 00:12:50.945 } 00:12:50.945 Got JSON-RPC error response 00:12:50.945 response: 00:12:50.945 { 00:12:50.945 "code": -32602, 00:12:50.945 "message": "Invalid MN z~G)pYdmP@K+PA[\"7@&q]S#a__oEK_^1 cL ~GI?x" 00:12:50.945 }' 00:12:50.945 12:22:34 
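The trace above shows target/invalid.sh assembling a random model number one character at a time: `printf %x` converts each code point to hex, and `echo -e '\xNN'` emits the byte. A minimal standalone sketch of that technique (simplified to a fixed code list; the real script draws codes at random):

```shell
# Build a string from numeric character codes, the way the loop above does:
# printf %x turns each code into hex, echo -e '\xNN' emits the byte.
string=''
for code in 122 126 71 63 120; do   # z ~ G ? x
  hex=$(printf '%x' "$code")
  string+=$(echo -e "\\x$hex")
done
echo "$string"   # -> z~G?x
```

The command substitution strips the trailing newline from `echo -e`, so each appended fragment is exactly one character.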
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:50.945 { 00:12:50.945 "nqn": "nqn.2016-06.io.spdk:cnode12730", 00:12:50.945 "model_number": "z~G)pYdmP@K+PA[\"7@&q]S#a__oEK_^1 cL ~GI?x", 00:12:50.945 "method": "nvmf_create_subsystem", 00:12:50.945 "req_id": 1 00:12:50.945 } 00:12:50.945 Got JSON-RPC error response 00:12:50.945 response: 00:12:50.945 { 00:12:50.945 "code": -32602, 00:12:50.945 "message": "Invalid MN z~G)pYdmP@K+PA[\"7@&q]S#a__oEK_^1 cL ~GI?x" 00:12:50.945 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:50.945 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:51.205 [2024-11-20 12:22:34.171485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.205 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:51.464 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:51.464 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:51.464 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:51.464 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:51.464 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:51.464 [2024-11-20 12:22:34.580814] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:51.723 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:51.723 { 00:12:51.723 "nqn": "nqn.2016-06.io.spdk:cnode", 
00:12:51.723 "listen_address": { 00:12:51.723 "trtype": "tcp", 00:12:51.723 "traddr": "", 00:12:51.723 "trsvcid": "4421" 00:12:51.723 }, 00:12:51.723 "method": "nvmf_subsystem_remove_listener", 00:12:51.723 "req_id": 1 00:12:51.723 } 00:12:51.723 Got JSON-RPC error response 00:12:51.723 response: 00:12:51.723 { 00:12:51.723 "code": -32602, 00:12:51.723 "message": "Invalid parameters" 00:12:51.723 }' 00:12:51.723 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:51.723 { 00:12:51.723 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:51.723 "listen_address": { 00:12:51.723 "trtype": "tcp", 00:12:51.723 "traddr": "", 00:12:51.723 "trsvcid": "4421" 00:12:51.723 }, 00:12:51.723 "method": "nvmf_subsystem_remove_listener", 00:12:51.723 "req_id": 1 00:12:51.723 } 00:12:51.724 Got JSON-RPC error response 00:12:51.724 response: 00:12:51.724 { 00:12:51.724 "code": -32602, 00:12:51.724 "message": "Invalid parameters" 00:12:51.724 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:51.724 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30263 -i 0 00:12:51.724 [2024-11-20 12:22:34.789495] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30263: invalid cntlid range [0-65519] 00:12:51.724 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:51.724 { 00:12:51.724 "nqn": "nqn.2016-06.io.spdk:cnode30263", 00:12:51.724 "min_cntlid": 0, 00:12:51.724 "method": "nvmf_create_subsystem", 00:12:51.724 "req_id": 1 00:12:51.724 } 00:12:51.724 Got JSON-RPC error response 00:12:51.724 response: 00:12:51.724 { 00:12:51.724 "code": -32602, 00:12:51.724 "message": "Invalid cntlid range [0-65519]" 00:12:51.724 }' 00:12:51.724 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:51.724 
{ 00:12:51.724 "nqn": "nqn.2016-06.io.spdk:cnode30263", 00:12:51.724 "min_cntlid": 0, 00:12:51.724 "method": "nvmf_create_subsystem", 00:12:51.724 "req_id": 1 00:12:51.724 } 00:12:51.724 Got JSON-RPC error response 00:12:51.724 response: 00:12:51.724 { 00:12:51.724 "code": -32602, 00:12:51.724 "message": "Invalid cntlid range [0-65519]" 00:12:51.724 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:51.724 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10320 -i 65520 00:12:51.982 [2024-11-20 12:22:35.002198] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10320: invalid cntlid range [65520-65519] 00:12:51.982 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:51.982 { 00:12:51.982 "nqn": "nqn.2016-06.io.spdk:cnode10320", 00:12:51.982 "min_cntlid": 65520, 00:12:51.982 "method": "nvmf_create_subsystem", 00:12:51.982 "req_id": 1 00:12:51.982 } 00:12:51.982 Got JSON-RPC error response 00:12:51.982 response: 00:12:51.983 { 00:12:51.983 "code": -32602, 00:12:51.983 "message": "Invalid cntlid range [65520-65519]" 00:12:51.983 }' 00:12:51.983 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:51.983 { 00:12:51.983 "nqn": "nqn.2016-06.io.spdk:cnode10320", 00:12:51.983 "min_cntlid": 65520, 00:12:51.983 "method": "nvmf_create_subsystem", 00:12:51.983 "req_id": 1 00:12:51.983 } 00:12:51.983 Got JSON-RPC error response 00:12:51.983 response: 00:12:51.983 { 00:12:51.983 "code": -32602, 00:12:51.983 "message": "Invalid cntlid range [65520-65519]" 00:12:51.983 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:51.983 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode9936 -I 0 00:12:52.242 [2024-11-20 12:22:35.198889] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9936: invalid cntlid range [1-0] 00:12:52.242 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:52.242 { 00:12:52.242 "nqn": "nqn.2016-06.io.spdk:cnode9936", 00:12:52.242 "max_cntlid": 0, 00:12:52.242 "method": "nvmf_create_subsystem", 00:12:52.242 "req_id": 1 00:12:52.242 } 00:12:52.242 Got JSON-RPC error response 00:12:52.242 response: 00:12:52.242 { 00:12:52.242 "code": -32602, 00:12:52.242 "message": "Invalid cntlid range [1-0]" 00:12:52.242 }' 00:12:52.242 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:52.242 { 00:12:52.242 "nqn": "nqn.2016-06.io.spdk:cnode9936", 00:12:52.242 "max_cntlid": 0, 00:12:52.242 "method": "nvmf_create_subsystem", 00:12:52.242 "req_id": 1 00:12:52.242 } 00:12:52.242 Got JSON-RPC error response 00:12:52.242 response: 00:12:52.242 { 00:12:52.242 "code": -32602, 00:12:52.242 "message": "Invalid cntlid range [1-0]" 00:12:52.242 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:52.242 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24353 -I 65520 00:12:52.502 [2024-11-20 12:22:35.399570] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24353: invalid cntlid range [1-65520] 00:12:52.502 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:52.502 { 00:12:52.502 "nqn": "nqn.2016-06.io.spdk:cnode24353", 00:12:52.502 "max_cntlid": 65520, 00:12:52.502 "method": "nvmf_create_subsystem", 00:12:52.502 "req_id": 1 00:12:52.502 } 00:12:52.502 Got JSON-RPC error response 00:12:52.502 response: 00:12:52.502 { 00:12:52.502 "code": -32602, 00:12:52.502 "message": 
"Invalid cntlid range [1-65520]" 00:12:52.502 }' 00:12:52.502 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:52.502 { 00:12:52.502 "nqn": "nqn.2016-06.io.spdk:cnode24353", 00:12:52.502 "max_cntlid": 65520, 00:12:52.502 "method": "nvmf_create_subsystem", 00:12:52.502 "req_id": 1 00:12:52.502 } 00:12:52.502 Got JSON-RPC error response 00:12:52.502 response: 00:12:52.502 { 00:12:52.502 "code": -32602, 00:12:52.502 "message": "Invalid cntlid range [1-65520]" 00:12:52.502 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:52.502 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31923 -i 6 -I 5 00:12:52.502 [2024-11-20 12:22:35.596279] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31923: invalid cntlid range [6-5] 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:52.762 { 00:12:52.762 "nqn": "nqn.2016-06.io.spdk:cnode31923", 00:12:52.762 "min_cntlid": 6, 00:12:52.762 "max_cntlid": 5, 00:12:52.762 "method": "nvmf_create_subsystem", 00:12:52.762 "req_id": 1 00:12:52.762 } 00:12:52.762 Got JSON-RPC error response 00:12:52.762 response: 00:12:52.762 { 00:12:52.762 "code": -32602, 00:12:52.762 "message": "Invalid cntlid range [6-5]" 00:12:52.762 }' 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:52.762 { 00:12:52.762 "nqn": "nqn.2016-06.io.spdk:cnode31923", 00:12:52.762 "min_cntlid": 6, 00:12:52.762 "max_cntlid": 5, 00:12:52.762 "method": "nvmf_create_subsystem", 00:12:52.762 "req_id": 1 00:12:52.762 } 00:12:52.762 Got JSON-RPC error response 00:12:52.762 response: 00:12:52.762 { 00:12:52.762 "code": -32602, 00:12:52.762 "message": "Invalid cntlid range [6-5]" 00:12:52.762 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 
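The four failures above ([0-65519], [65520-65519], [1-0], [1-65520], [6-5]) imply the bounds rpc_nvmf_create_subsystem enforces: both cntlid values must lie in [1-65519] and min must not exceed max. A hedged sketch of equivalent validation logic (a hypothetical helper mirroring the observed error strings, not SPDK's actual C code):

```shell
# Mirror of the cntlid checks implied by the JSON-RPC errors above.
check_cntlid_range() {
  local min=$1 max=$2
  # 0xFFEF == 65519, the upper bound seen in the error messages.
  if (( min < 1 || min > 0xFFEF || max < 1 || max > 0xFFEF || min > max )); then
    echo "Invalid cntlid range [$min-$max]"
    return 1
  fi
  return 0
}
check_cntlid_range 0 65519   # -> Invalid cntlid range [0-65519]
check_cntlid_range 6 5       # -> Invalid cntlid range [6-5]
check_cntlid_range 1 65519   # valid, prints nothing
```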
00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:52.762 { 00:12:52.762 "name": "foobar", 00:12:52.762 "method": "nvmf_delete_target", 00:12:52.762 "req_id": 1 00:12:52.762 } 00:12:52.762 Got JSON-RPC error response 00:12:52.762 response: 00:12:52.762 { 00:12:52.762 "code": -32602, 00:12:52.762 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:52.762 }' 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:52.762 { 00:12:52.762 "name": "foobar", 00:12:52.762 "method": "nvmf_delete_target", 00:12:52.762 "req_id": 1 00:12:52.762 } 00:12:52.762 Got JSON-RPC error response 00:12:52.762 response: 00:12:52.762 { 00:12:52.762 "code": -32602, 00:12:52.762 "message": "The specified target doesn't exist, cannot delete it." 
00:12:52.762 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:52.762 rmmod nvme_tcp 00:12:52.762 rmmod nvme_fabrics 00:12:52.762 rmmod nvme_keyring 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 381341 ']' 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 381341 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 381341 ']' 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 381341 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 381341 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 381341' 00:12:52.762 killing process with pid 381341 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 381341 00:12:52.762 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 381341 00:12:53.023 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:53.023 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:53.024 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:53.024 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:53.024 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:53.024 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:53.024 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:53.024 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:53.024 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:53.024 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.024 12:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.024 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:55.562 00:12:55.562 real 0m12.008s 00:12:55.562 user 0m18.439s 00:12:55.562 sys 0m5.472s 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:55.562 ************************************ 00:12:55.562 END TEST nvmf_invalid 00:12:55.562 ************************************ 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:55.562 ************************************ 00:12:55.562 START TEST nvmf_connect_stress 00:12:55.562 ************************************ 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:55.562 * Looking for test storage... 
00:12:55.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.562 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:55.562 12:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.563 12:22:38 
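The `lt 1.15 2` trace above walks scripts/common.sh's cmp_versions: split both versions on dots, then compare field by field, treating missing fields as 0. A minimal sketch of that comparison (a simplified reimplementation, not the script's exact code):

```shell
# Dotted-version less-than, field by field, as cmp_versions does above.
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field decides
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1   # all fields equal
}
version_lt 1.15 2 && echo "1.15 < 2"   # -> 1.15 < 2
```

Here `1.15` splits into fields (1, 15) and `2` into (2); the first field already decides the result, matching the `ver1[v] < ver2[v]` step in the trace.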
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:55.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.563 --rc genhtml_branch_coverage=1 00:12:55.563 --rc genhtml_function_coverage=1 00:12:55.563 --rc genhtml_legend=1 00:12:55.563 --rc geninfo_all_blocks=1 00:12:55.563 --rc geninfo_unexecuted_blocks=1 00:12:55.563 00:12:55.563 ' 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:55.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.563 --rc genhtml_branch_coverage=1 00:12:55.563 --rc genhtml_function_coverage=1 00:12:55.563 --rc genhtml_legend=1 00:12:55.563 --rc geninfo_all_blocks=1 00:12:55.563 --rc geninfo_unexecuted_blocks=1 00:12:55.563 00:12:55.563 ' 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:55.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.563 --rc genhtml_branch_coverage=1 00:12:55.563 --rc genhtml_function_coverage=1 00:12:55.563 --rc genhtml_legend=1 00:12:55.563 --rc geninfo_all_blocks=1 00:12:55.563 --rc geninfo_unexecuted_blocks=1 00:12:55.563 00:12:55.563 ' 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:55.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.563 --rc genhtml_branch_coverage=1 00:12:55.563 --rc genhtml_function_coverage=1 00:12:55.563 --rc genhtml_legend=1 00:12:55.563 --rc geninfo_all_blocks=1 00:12:55.563 --rc geninfo_unexecuted_blocks=1 00:12:55.563 00:12:55.563 ' 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:55.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
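Note the `[: : integer expression expected` error above: common.sh line 33 runs `'[' '' -eq 1 ']'`, and `test`'s `-eq` requires integer operands on both sides, so an unset/empty variable makes the test itself fail with status 2 rather than evaluate to false. A small sketch of the failure mode and a guarded form (illustrative variable name, not common.sh's actual one):

```shell
# test(1)'s -eq needs integers; an empty operand is an error, not "false".
flag=''
[ "$flag" -eq 1 ] 2>/dev/null || echo "non-integer operand"   # -> non-integer operand
# Defaulting the expansion keeps the numeric test well-formed:
[ "${flag:-0}" -eq 1 ] || echo "flag is not 1"                # -> flag is not 1
```

With `${flag:-0}` the comparison sees a real integer, so the branch is taken on the value rather than on a parse error.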
00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:55.563 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.134 12:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:02.134 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:02.134 12:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:02.134 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:02.134 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:02.134 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.134 12:22:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:02.134 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.134 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:02.134 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:02.134 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:02.135 Found net devices under 0000:86:00.0: cvl_0_0 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:02.135 Found net devices under 0000:86:00.1: cvl_0_1 
00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:02.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:02.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:13:02.135 00:13:02.135 --- 10.0.0.2 ping statistics --- 00:13:02.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.135 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:02.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:13:02.135 00:13:02.135 --- 10.0.0.1 ping statistics --- 00:13:02.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.135 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:02.135 12:22:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=385516 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 385516 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 385516 ']' 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.135 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.135 [2024-11-20 12:22:44.341318] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:13:02.135 [2024-11-20 12:22:44.341361] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.135 [2024-11-20 12:22:44.421175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:02.135 [2024-11-20 12:22:44.463615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.135 [2024-11-20 12:22:44.463649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.135 [2024-11-20 12:22:44.463655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.135 [2024-11-20 12:22:44.463661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.135 [2024-11-20 12:22:44.463666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
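The `nvmf_tcp_init` stretch of the log earlier (the `ip netns add` / `ip link set … netns` / `ip addr add` / `iptables -I INPUT` commands, followed by the two verification pings) can be condensed into the dry-run sketch below. The namespace and interface names (`cvl_0_0_ns_spdk`, `cvl_0_0`, `cvl_0_1`) and the 10.0.0.x addresses are taken from this particular run and will differ on other machines; the `run` wrapper only echoes each command, since executing them for real requires root.

```shell
# Dry-run recap of the netns wiring this test harness performs: the target
# NIC is moved into a private namespace, the initiator NIC stays in the root
# namespace, and TCP/4420 (NVMe-oF) is opened on the initiator side.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0    # target side, moved into the namespace
INI_IF=cvl_0_1    # initiator side, stays in the root namespace
TGT_IP=10.0.0.2
INI_IP=10.0.0.1

run() { echo "+ $*"; }   # print instead of execute (real commands need root)

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add "$INI_IP/24" dev "$INI_IF"
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"
run ip netns exec "$NS" ping -c 1 "$INI_IP"
```

This is why the log then prefixes the `nvmf_tgt` launch with `ip netns exec cvl_0_0_ns_spdk`: the target process must live in the namespace that owns `cvl_0_0`.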
00:13:02.135 [2024-11-20 12:22:44.465135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.135 [2024-11-20 12:22:44.465242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.135 [2024-11-20 12:22:44.465243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.135 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.135 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:02.135 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:02.135 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:02.135 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.135 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.135 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:02.135 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.135 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.135 [2024-11-20 12:22:45.211583] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.135 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.135 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:02.135 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:02.135 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.135 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.135 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.135 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.135 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.135 [2024-11-20 12:22:45.227740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.135 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.136 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:02.136 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.136 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.136 NULL1 00:13:02.136 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.136 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=385761 00:13:02.136 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:02.136 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:02.136 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:02.136 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.395 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.654 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.654 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:02.654 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.654 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.654 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.937 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.937 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:02.937 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.937 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.937 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.196 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.196 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:03.196 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.196 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.196 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.764 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.764 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:03.764 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.764 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.764 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.023 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.023 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:04.023 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.023 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.023 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.281 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.281 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:04.281 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.281 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.281 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.540 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.540 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:04.540 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.540 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.540 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.799 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.799 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:04.799 12:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.799 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.799 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.367 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.367 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:05.367 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.367 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.367 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.627 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.627 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:05.627 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.627 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.627 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.886 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.886 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:05.886 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.886 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.886 12:22:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.146 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.146 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:06.146 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.146 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.146 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.405 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.405 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:06.405 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.405 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.406 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.974 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.974 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:06.974 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.974 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.974 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.231 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.231 12:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:07.231 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.231 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.231 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.489 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.490 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:07.490 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.490 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.490 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.748 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.748 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:07.748 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.748 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.748 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.316 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.316 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:08.316 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.316 12:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.316 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.574 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.574 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:08.574 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.574 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.574 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.832 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.832 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:08.832 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.832 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.832 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.091 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.091 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:09.091 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.091 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.091 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.350 12:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.350 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:09.350 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.350 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.350 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.917 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.917 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:09.917 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.917 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.917 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.176 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.176 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:10.176 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.176 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.176 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.434 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.434 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:10.434 
12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.434 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.434 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.693 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.693 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:10.693 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.693 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.693 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.952 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.952 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:10.952 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.952 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.952 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.521 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.521 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:11.521 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.521 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.521 
12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.780 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.780 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:11.780 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.780 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.780 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.040 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.040 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:12.040 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.040 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.040 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.299 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:12.299 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.299 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 385761 00:13:12.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (385761) - No such process 00:13:12.299 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 385761 00:13:12.299 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:12.299 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:12.299 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:12.299 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:12.299 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:12.299 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:12.299 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:12.299 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:12.299 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:12.299 rmmod nvme_tcp 00:13:12.299 rmmod nvme_fabrics 00:13:12.299 rmmod nvme_keyring 00:13:12.559 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:12.559 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:12.559 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:12.559 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 385516 ']' 00:13:12.559 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 385516 00:13:12.559 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 385516 ']' 00:13:12.559 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 385516 00:13:12.559 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 
00:13:12.559 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.559 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 385516 00:13:12.559 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:12.559 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:12.559 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 385516' 00:13:12.559 killing process with pid 385516 00:13:12.559 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 385516 00:13:12.559 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 385516 00:13:12.559 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:12.560 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:12.560 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:12.560 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:12.560 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:12.560 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:12.560 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:12.560 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:12.560 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:13:12.560 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.560 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:12.560 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:15.098 00:13:15.098 real 0m19.567s 00:13:15.098 user 0m41.173s 00:13:15.098 sys 0m8.580s 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.098 ************************************ 00:13:15.098 END TEST nvmf_connect_stress 00:13:15.098 ************************************ 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:15.098 ************************************ 00:13:15.098 START TEST nvmf_fused_ordering 00:13:15.098 ************************************ 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:15.098 * Looking for test storage... 
00:13:15.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:15.098 12:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:15.098 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.099 12:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:15.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.099 --rc genhtml_branch_coverage=1 00:13:15.099 --rc genhtml_function_coverage=1 00:13:15.099 --rc genhtml_legend=1 00:13:15.099 --rc geninfo_all_blocks=1 00:13:15.099 --rc geninfo_unexecuted_blocks=1 00:13:15.099 00:13:15.099 ' 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:15.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.099 --rc genhtml_branch_coverage=1 00:13:15.099 --rc genhtml_function_coverage=1 00:13:15.099 --rc genhtml_legend=1 00:13:15.099 --rc geninfo_all_blocks=1 00:13:15.099 --rc geninfo_unexecuted_blocks=1 00:13:15.099 00:13:15.099 ' 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:15.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.099 --rc genhtml_branch_coverage=1 00:13:15.099 --rc genhtml_function_coverage=1 00:13:15.099 --rc genhtml_legend=1 00:13:15.099 --rc geninfo_all_blocks=1 00:13:15.099 --rc geninfo_unexecuted_blocks=1 00:13:15.099 00:13:15.099 ' 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:15.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.099 --rc genhtml_branch_coverage=1 00:13:15.099 --rc genhtml_function_coverage=1 00:13:15.099 --rc genhtml_legend=1 00:13:15.099 --rc geninfo_all_blocks=1 00:13:15.099 --rc geninfo_unexecuted_blocks=1 00:13:15.099 00:13:15.099 ' 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:15.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.099 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.099 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:15.099 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:15.099 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:15.099 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:21.672 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.672 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:21.672 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:21.672 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:21.672 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:21.672 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:21.672 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:21.672 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:21.672 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:21.672 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:21.672 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:21.672 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:21.672 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:21.672 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:21.672 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:21.672 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.672 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.672 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.672 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.673 12:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:21.673 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:21.673 12:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:21.673 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.673 12:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:21.673 Found net devices under 0000:86:00.0: cvl_0_0 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:21.673 Found net devices under 0000:86:00.1: cvl_0_1 
00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:21.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:21.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:13:21.673 00:13:21.673 --- 10.0.0.2 ping statistics --- 00:13:21.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.673 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:21.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:13:21.673 00:13:21.673 --- 10.0.0.1 ping statistics --- 00:13:21.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.673 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:21.673 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:21.673 12:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:21.674 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:21.674 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:21.674 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=390916 00:13:21.674 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:21.674 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 390916 00:13:21.674 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 390916 ']' 00:13:21.674 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.674 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:21.674 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.674 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:21.674 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:21.674 [2024-11-20 12:23:03.978275] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:13:21.674 [2024-11-20 12:23:03.978329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.674 [2024-11-20 12:23:04.057144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.674 [2024-11-20 12:23:04.098800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.674 [2024-11-20 12:23:04.098838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.674 [2024-11-20 12:23:04.098845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.674 [2024-11-20 12:23:04.098852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.674 [2024-11-20 12:23:04.098857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:21.674 [2024-11-20 12:23:04.099430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:21.674 [2024-11-20 12:23:04.234942] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:21.674 [2024-11-20 12:23:04.255120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:21.674 NULL1 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.674 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:21.674 [2024-11-20 12:23:04.314880] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:13:21.674 [2024-11-20 12:23:04.314933] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391010 ] 00:13:21.674 Attached to nqn.2016-06.io.spdk:cnode1 00:13:21.674 Namespace ID: 1 size: 1GB 00:13:21.674 fused_ordering(0) 00:13:21.674 fused_ordering(1) 00:13:21.674 fused_ordering(2) 00:13:21.674 fused_ordering(3) 00:13:21.674 fused_ordering(4) 00:13:21.674 fused_ordering(5) 00:13:21.674 fused_ordering(6) 00:13:21.674 fused_ordering(7) 00:13:21.674 fused_ordering(8) 00:13:21.674 fused_ordering(9) 00:13:21.674 fused_ordering(10) 00:13:21.674 fused_ordering(11) 00:13:21.674 fused_ordering(12) 00:13:21.674 fused_ordering(13) 00:13:21.674 fused_ordering(14) 00:13:21.674 fused_ordering(15) 00:13:21.674 fused_ordering(16) 00:13:21.674 fused_ordering(17) 00:13:21.674 fused_ordering(18) 00:13:21.674 fused_ordering(19) 00:13:21.674 fused_ordering(20) 00:13:21.674 fused_ordering(21) 00:13:21.674 fused_ordering(22) 00:13:21.674 fused_ordering(23) 00:13:21.674 fused_ordering(24) 00:13:21.674 fused_ordering(25) 00:13:21.674 fused_ordering(26) 00:13:21.674 fused_ordering(27) 00:13:21.674 
fused_ordering(28) 00:13:21.674 fused_ordering(29) 00:13:21.674 fused_ordering(30) 00:13:21.674 fused_ordering(31) 00:13:21.674 fused_ordering(32) 00:13:21.674 fused_ordering(33) 00:13:21.674 fused_ordering(34) 00:13:21.674 fused_ordering(35) 00:13:21.674 fused_ordering(36) 00:13:21.674 fused_ordering(37) 00:13:21.674 fused_ordering(38) 00:13:21.674 fused_ordering(39) 00:13:21.674 fused_ordering(40) 00:13:21.674 fused_ordering(41) 00:13:21.674 fused_ordering(42) 00:13:21.674 fused_ordering(43) 00:13:21.674 fused_ordering(44) 00:13:21.674 fused_ordering(45) 00:13:21.674 fused_ordering(46) 00:13:21.674 fused_ordering(47) 00:13:21.674 fused_ordering(48) 00:13:21.674 fused_ordering(49) 00:13:21.674 fused_ordering(50) 00:13:21.674 fused_ordering(51) 00:13:21.674 fused_ordering(52) 00:13:21.674 fused_ordering(53) 00:13:21.674 fused_ordering(54) 00:13:21.674 fused_ordering(55) 00:13:21.674 fused_ordering(56) 00:13:21.674 fused_ordering(57) 00:13:21.674 fused_ordering(58) 00:13:21.674 fused_ordering(59) 00:13:21.674 fused_ordering(60) 00:13:21.674 fused_ordering(61) 00:13:21.674 fused_ordering(62) 00:13:21.674 fused_ordering(63) 00:13:21.674 fused_ordering(64) 00:13:21.674 fused_ordering(65) 00:13:21.674 fused_ordering(66) 00:13:21.674 fused_ordering(67) 00:13:21.674 fused_ordering(68) 00:13:21.674 fused_ordering(69) 00:13:21.674 fused_ordering(70) 00:13:21.674 fused_ordering(71) 00:13:21.674 fused_ordering(72) 00:13:21.674 fused_ordering(73) 00:13:21.674 fused_ordering(74) 00:13:21.674 fused_ordering(75) 00:13:21.674 fused_ordering(76) 00:13:21.674 fused_ordering(77) 00:13:21.674 fused_ordering(78) 00:13:21.674 fused_ordering(79) 00:13:21.674 fused_ordering(80) 00:13:21.674 fused_ordering(81) 00:13:21.674 fused_ordering(82) 00:13:21.674 fused_ordering(83) 00:13:21.674 fused_ordering(84) 00:13:21.674 fused_ordering(85) 00:13:21.674 fused_ordering(86) 00:13:21.674 fused_ordering(87) 00:13:21.674 fused_ordering(88) 00:13:21.674 fused_ordering(89) 00:13:21.674 
fused_ordering(90) 00:13:21.674 fused_ordering(91) 00:13:21.674 fused_ordering(92) 00:13:21.674 fused_ordering(93) 00:13:21.674 fused_ordering(94) 00:13:21.674 fused_ordering(95) 00:13:21.674 fused_ordering(96) 00:13:21.674 fused_ordering(97) 00:13:21.675 fused_ordering(98) 00:13:21.675 fused_ordering(99) 00:13:21.675 fused_ordering(100) 00:13:21.675 fused_ordering(101) 00:13:21.675 fused_ordering(102) 00:13:21.675 fused_ordering(103) 00:13:21.675 fused_ordering(104) 00:13:21.675 fused_ordering(105) 00:13:21.675 fused_ordering(106) 00:13:21.675 fused_ordering(107) 00:13:21.675 fused_ordering(108) 00:13:21.675 fused_ordering(109) 00:13:21.675 fused_ordering(110) 00:13:21.675 fused_ordering(111) 00:13:21.675 fused_ordering(112) 00:13:21.675 fused_ordering(113) 00:13:21.675 fused_ordering(114) 00:13:21.675 fused_ordering(115) 00:13:21.675 fused_ordering(116) 00:13:21.675 fused_ordering(117) 00:13:21.675 fused_ordering(118) 00:13:21.675 fused_ordering(119) 00:13:21.675 fused_ordering(120) 00:13:21.675 fused_ordering(121) 00:13:21.675 fused_ordering(122) 00:13:21.675 fused_ordering(123) 00:13:21.675 fused_ordering(124) 00:13:21.675 fused_ordering(125) 00:13:21.675 fused_ordering(126) 00:13:21.675 fused_ordering(127) 00:13:21.675 fused_ordering(128) 00:13:21.675 fused_ordering(129) 00:13:21.675 fused_ordering(130) 00:13:21.675 fused_ordering(131) 00:13:21.675 fused_ordering(132) 00:13:21.675 fused_ordering(133) 00:13:21.675 fused_ordering(134) 00:13:21.675 fused_ordering(135) 00:13:21.675 fused_ordering(136) 00:13:21.675 fused_ordering(137) 00:13:21.675 fused_ordering(138) 00:13:21.675 fused_ordering(139) 00:13:21.675 fused_ordering(140) 00:13:21.675 fused_ordering(141) 00:13:21.675 fused_ordering(142) 00:13:21.675 fused_ordering(143) 00:13:21.675 fused_ordering(144) 00:13:21.675 fused_ordering(145) 00:13:21.675 fused_ordering(146) 00:13:21.675 fused_ordering(147) 00:13:21.675 fused_ordering(148) 00:13:21.675 fused_ordering(149) 00:13:21.675 fused_ordering(150) 
00:13:21.675 fused_ordering(151) 00:13:21.675 fused_ordering(152) 00:13:21.675 fused_ordering(153) 00:13:21.675 fused_ordering(154) 00:13:21.675 fused_ordering(155) 00:13:21.675 fused_ordering(156) 00:13:21.675 fused_ordering(157) 00:13:21.675 fused_ordering(158) 00:13:21.675 fused_ordering(159) 00:13:21.675 fused_ordering(160) 00:13:21.675 fused_ordering(161) 00:13:21.675 fused_ordering(162) 00:13:21.675 fused_ordering(163) 00:13:21.675 fused_ordering(164) 00:13:21.675 fused_ordering(165) 00:13:21.675 fused_ordering(166) 00:13:21.675 fused_ordering(167) 00:13:21.675 fused_ordering(168) 00:13:21.675 fused_ordering(169) 00:13:21.675 fused_ordering(170) 00:13:21.675 fused_ordering(171) 00:13:21.675 fused_ordering(172) 00:13:21.675 fused_ordering(173) 00:13:21.675 fused_ordering(174) 00:13:21.675 fused_ordering(175) 00:13:21.675 fused_ordering(176) 00:13:21.675 fused_ordering(177) 00:13:21.675 fused_ordering(178) 00:13:21.675 fused_ordering(179) 00:13:21.675 fused_ordering(180) 00:13:21.675 fused_ordering(181) 00:13:21.675 fused_ordering(182) 00:13:21.675 fused_ordering(183) 00:13:21.675 fused_ordering(184) 00:13:21.675 fused_ordering(185) 00:13:21.675 fused_ordering(186) 00:13:21.675 fused_ordering(187) 00:13:21.675 fused_ordering(188) 00:13:21.675 fused_ordering(189) 00:13:21.675 fused_ordering(190) 00:13:21.675 fused_ordering(191) 00:13:21.675 fused_ordering(192) 00:13:21.675 fused_ordering(193) 00:13:21.675 fused_ordering(194) 00:13:21.675 fused_ordering(195) 00:13:21.675 fused_ordering(196) 00:13:21.675 fused_ordering(197) 00:13:21.675 fused_ordering(198) 00:13:21.675 fused_ordering(199) 00:13:21.675 fused_ordering(200) 00:13:21.675 fused_ordering(201) 00:13:21.675 fused_ordering(202) 00:13:21.675 fused_ordering(203) 00:13:21.675 fused_ordering(204) 00:13:21.675 fused_ordering(205) 00:13:21.934 fused_ordering(206) 00:13:21.934 fused_ordering(207) 00:13:21.934 fused_ordering(208) 00:13:21.934 fused_ordering(209) 00:13:21.934 fused_ordering(210) 00:13:21.934 
fused_ordering(211) 00:13:21.934 fused_ordering(212) 00:13:21.934 fused_ordering(213) 00:13:21.934 fused_ordering(214) 00:13:21.934 fused_ordering(215) 00:13:21.934 fused_ordering(216) 00:13:21.934 fused_ordering(217) 00:13:21.934 fused_ordering(218) 00:13:21.934 fused_ordering(219) 00:13:21.934 fused_ordering(220) 00:13:21.934 fused_ordering(221) 00:13:21.934 fused_ordering(222) 00:13:21.934 fused_ordering(223) 00:13:21.934 fused_ordering(224) 00:13:21.934 fused_ordering(225) 00:13:21.934 fused_ordering(226) 00:13:21.934 fused_ordering(227) 00:13:21.934 fused_ordering(228) 00:13:21.934 fused_ordering(229) 00:13:21.934 fused_ordering(230) 00:13:21.934 fused_ordering(231) 00:13:21.934 fused_ordering(232) 00:13:21.934 fused_ordering(233) 00:13:21.934 fused_ordering(234) 00:13:21.934 fused_ordering(235) 00:13:21.934 fused_ordering(236) 00:13:21.934 fused_ordering(237) 00:13:21.934 fused_ordering(238) 00:13:21.934 fused_ordering(239) 00:13:21.934 fused_ordering(240) 00:13:21.934 fused_ordering(241) 00:13:21.934 fused_ordering(242) 00:13:21.934 fused_ordering(243) 00:13:21.934 fused_ordering(244) 00:13:21.934 fused_ordering(245) 00:13:21.934 fused_ordering(246) 00:13:21.934 fused_ordering(247) 00:13:21.934 fused_ordering(248) 00:13:21.934 fused_ordering(249) 00:13:21.934 fused_ordering(250) 00:13:21.934 fused_ordering(251) 00:13:21.935 fused_ordering(252) 00:13:21.935 fused_ordering(253) 00:13:21.935 fused_ordering(254) 00:13:21.935 fused_ordering(255) 00:13:21.935 fused_ordering(256) 00:13:21.935 fused_ordering(257) 00:13:21.935 fused_ordering(258) 00:13:21.935 fused_ordering(259) 00:13:21.935 fused_ordering(260) 00:13:21.935 fused_ordering(261) 00:13:21.935 fused_ordering(262) 00:13:21.935 fused_ordering(263) 00:13:21.935 fused_ordering(264) 00:13:21.935 fused_ordering(265) 00:13:21.935 fused_ordering(266) 00:13:21.935 fused_ordering(267) 00:13:21.935 fused_ordering(268) 00:13:21.935 fused_ordering(269) 00:13:21.935 fused_ordering(270) 00:13:21.935 fused_ordering(271) 
00:13:21.935 fused_ordering(272) 00:13:21.935 fused_ordering(273) 00:13:21.935 fused_ordering(274) 00:13:21.935 fused_ordering(275) 00:13:21.935 fused_ordering(276) 00:13:21.935 fused_ordering(277) 00:13:21.935 fused_ordering(278) 00:13:21.935 fused_ordering(279) 00:13:21.935 fused_ordering(280) 00:13:21.935 fused_ordering(281) 00:13:21.935 fused_ordering(282) 00:13:21.935 fused_ordering(283) 00:13:21.935 fused_ordering(284) 00:13:21.935 fused_ordering(285) 00:13:21.935 fused_ordering(286) 00:13:21.935 fused_ordering(287) 00:13:21.935 fused_ordering(288) 00:13:21.935 fused_ordering(289) 00:13:21.935 fused_ordering(290) 00:13:21.935 fused_ordering(291) 00:13:21.935 fused_ordering(292) 00:13:21.935 fused_ordering(293) 00:13:21.935 fused_ordering(294) 00:13:21.935 fused_ordering(295) 00:13:21.935 fused_ordering(296) 00:13:21.935 fused_ordering(297) 00:13:21.935 fused_ordering(298) 00:13:21.935 fused_ordering(299) 00:13:21.935 fused_ordering(300) 00:13:21.935 fused_ordering(301) 00:13:21.935 fused_ordering(302) 00:13:21.935 fused_ordering(303) 00:13:21.935 fused_ordering(304) 00:13:21.935 fused_ordering(305) 00:13:21.935 fused_ordering(306) 00:13:21.935 fused_ordering(307) 00:13:21.935 fused_ordering(308) 00:13:21.935 fused_ordering(309) 00:13:21.935 fused_ordering(310) 00:13:21.935 fused_ordering(311) 00:13:21.935 fused_ordering(312) 00:13:21.935 fused_ordering(313) 00:13:21.935 fused_ordering(314) 00:13:21.935 fused_ordering(315) 00:13:21.935 fused_ordering(316) 00:13:21.935 fused_ordering(317) 00:13:21.935 fused_ordering(318) 00:13:21.935 fused_ordering(319) 00:13:21.935 fused_ordering(320) 00:13:21.935 fused_ordering(321) 00:13:21.935 fused_ordering(322) 00:13:21.935 fused_ordering(323) 00:13:21.935 fused_ordering(324) 00:13:21.935 fused_ordering(325) 00:13:21.935 fused_ordering(326) 00:13:21.935 fused_ordering(327) 00:13:21.935 fused_ordering(328) 00:13:21.935 fused_ordering(329) 00:13:21.935 fused_ordering(330) 00:13:21.935 fused_ordering(331) 00:13:21.935 
fused_ordering(332) 00:13:21.935 fused_ordering(333) 00:13:21.935 fused_ordering(334) 00:13:21.935 fused_ordering(335) 00:13:21.935 fused_ordering(336) 00:13:21.935 fused_ordering(337) 00:13:21.935 fused_ordering(338) 00:13:21.935 fused_ordering(339) 00:13:21.935 fused_ordering(340) 00:13:21.935 fused_ordering(341) 00:13:21.935 fused_ordering(342) 00:13:21.935 fused_ordering(343) 00:13:21.935 fused_ordering(344) 00:13:21.935 fused_ordering(345) 00:13:21.935 fused_ordering(346) 00:13:21.935 fused_ordering(347) 00:13:21.935 fused_ordering(348) 00:13:21.935 fused_ordering(349) 00:13:21.935 fused_ordering(350) 00:13:21.935 fused_ordering(351) 00:13:21.935 fused_ordering(352) 00:13:21.935 fused_ordering(353) 00:13:21.935 fused_ordering(354) 00:13:21.935 fused_ordering(355) 00:13:21.935 fused_ordering(356) 00:13:21.935 fused_ordering(357) 00:13:21.935 fused_ordering(358) 00:13:21.935 fused_ordering(359) 00:13:21.935 fused_ordering(360) 00:13:21.935 fused_ordering(361) 00:13:21.935 fused_ordering(362) 00:13:21.935 fused_ordering(363) 00:13:21.935 fused_ordering(364) 00:13:21.935 fused_ordering(365) 00:13:21.935 fused_ordering(366) 00:13:21.935 fused_ordering(367) 00:13:21.935 fused_ordering(368) 00:13:21.935 fused_ordering(369) 00:13:21.935 fused_ordering(370) 00:13:21.935 fused_ordering(371) 00:13:21.935 fused_ordering(372) 00:13:21.935 fused_ordering(373) 00:13:21.935 fused_ordering(374) 00:13:21.935 fused_ordering(375) 00:13:21.935 fused_ordering(376) 00:13:21.935 fused_ordering(377) 00:13:21.935 fused_ordering(378) 00:13:21.935 fused_ordering(379) 00:13:21.935 fused_ordering(380) 00:13:21.935 fused_ordering(381) 00:13:21.935 fused_ordering(382) 00:13:21.935 fused_ordering(383) 00:13:21.935 fused_ordering(384) 00:13:21.935 fused_ordering(385) 00:13:21.935 fused_ordering(386) 00:13:21.935 fused_ordering(387) 00:13:21.935 fused_ordering(388) 00:13:21.935 fused_ordering(389) 00:13:21.935 fused_ordering(390) 00:13:21.935 fused_ordering(391) 00:13:21.935 fused_ordering(392) 
00:13:21.935 fused_ordering(393) 00:13:21.935 fused_ordering(394) 00:13:21.935 fused_ordering(395) 00:13:21.935 fused_ordering(396) 00:13:21.935 fused_ordering(397) 00:13:21.935 fused_ordering(398) 00:13:21.935 fused_ordering(399) 00:13:21.935 fused_ordering(400) 00:13:21.935 fused_ordering(401) 00:13:21.935 fused_ordering(402) 00:13:21.935 fused_ordering(403) 00:13:21.935 fused_ordering(404) 00:13:21.935 fused_ordering(405) 00:13:21.935 fused_ordering(406) 00:13:21.935 fused_ordering(407) 00:13:21.935 fused_ordering(408) 00:13:21.935 fused_ordering(409) 00:13:21.935 fused_ordering(410) 00:13:22.196 fused_ordering(411) 00:13:22.196 fused_ordering(412) 00:13:22.196 fused_ordering(413) 00:13:22.196 fused_ordering(414) 00:13:22.196 fused_ordering(415) 00:13:22.196 fused_ordering(416) 00:13:22.196 fused_ordering(417) 00:13:22.196 fused_ordering(418) 00:13:22.196 fused_ordering(419) 00:13:22.196 fused_ordering(420) 00:13:22.196 fused_ordering(421) 00:13:22.196 fused_ordering(422) 00:13:22.196 fused_ordering(423) 00:13:22.196 fused_ordering(424) 00:13:22.196 fused_ordering(425) 00:13:22.196 fused_ordering(426) 00:13:22.196 fused_ordering(427) 00:13:22.196 fused_ordering(428) 00:13:22.196 fused_ordering(429) 00:13:22.196 fused_ordering(430) 00:13:22.196 fused_ordering(431) 00:13:22.196 fused_ordering(432) 00:13:22.196 fused_ordering(433) 00:13:22.196 fused_ordering(434) 00:13:22.196 fused_ordering(435) 00:13:22.196 fused_ordering(436) 00:13:22.196 fused_ordering(437) 00:13:22.196 fused_ordering(438) 00:13:22.196 fused_ordering(439) 00:13:22.196 fused_ordering(440) 00:13:22.196 fused_ordering(441) 00:13:22.196 fused_ordering(442) 00:13:22.196 fused_ordering(443) 00:13:22.196 fused_ordering(444) 00:13:22.196 fused_ordering(445) 00:13:22.196 fused_ordering(446) 00:13:22.196 fused_ordering(447) 00:13:22.196 fused_ordering(448) 00:13:22.196 fused_ordering(449) 00:13:22.196 fused_ordering(450) 00:13:22.196 fused_ordering(451) 00:13:22.196 fused_ordering(452) 00:13:22.196 
fused_ordering(453) 00:13:22.196 fused_ordering(454) 00:13:22.196 fused_ordering(455) 00:13:22.196 fused_ordering(456) 00:13:22.196 fused_ordering(457) 00:13:22.196 fused_ordering(458) 00:13:22.196 fused_ordering(459) 00:13:22.196 fused_ordering(460) 00:13:22.196 fused_ordering(461) 00:13:22.196 fused_ordering(462) 00:13:22.196 fused_ordering(463) 00:13:22.196 fused_ordering(464) 00:13:22.196 fused_ordering(465) 00:13:22.196 fused_ordering(466) 00:13:22.196 fused_ordering(467) 00:13:22.196 fused_ordering(468) 00:13:22.196 fused_ordering(469) 00:13:22.196 fused_ordering(470) 00:13:22.197 fused_ordering(471) 00:13:22.197 fused_ordering(472) 00:13:22.197 fused_ordering(473) 00:13:22.197 fused_ordering(474) 00:13:22.197 fused_ordering(475) 00:13:22.197 fused_ordering(476) 00:13:22.197 fused_ordering(477) 00:13:22.197 fused_ordering(478) 00:13:22.197 fused_ordering(479) 00:13:22.197 fused_ordering(480) 00:13:22.197 fused_ordering(481) 00:13:22.197 fused_ordering(482) 00:13:22.197 fused_ordering(483) 00:13:22.197 fused_ordering(484) 00:13:22.197 fused_ordering(485) 00:13:22.197 fused_ordering(486) 00:13:22.197 fused_ordering(487) 00:13:22.197 fused_ordering(488) 00:13:22.197 fused_ordering(489) 00:13:22.197 fused_ordering(490) 00:13:22.197 fused_ordering(491) 00:13:22.197 fused_ordering(492) 00:13:22.197 fused_ordering(493) 00:13:22.197 fused_ordering(494) 00:13:22.197 fused_ordering(495) 00:13:22.197 fused_ordering(496) 00:13:22.197 fused_ordering(497) 00:13:22.197 fused_ordering(498) 00:13:22.197 fused_ordering(499) 00:13:22.197 fused_ordering(500) 00:13:22.197 fused_ordering(501) 00:13:22.197 fused_ordering(502) 00:13:22.197 fused_ordering(503) 00:13:22.197 fused_ordering(504) 00:13:22.197 fused_ordering(505) 00:13:22.197 fused_ordering(506) 00:13:22.197 fused_ordering(507) 00:13:22.197 fused_ordering(508) 00:13:22.197 fused_ordering(509) 00:13:22.197 fused_ordering(510) 00:13:22.197 fused_ordering(511) 00:13:22.197 fused_ordering(512) 00:13:22.197 fused_ordering(513) 
00:13:22.197 fused_ordering(514) 00:13:22.197 fused_ordering(515) 00:13:22.197 fused_ordering(516) 00:13:22.197 fused_ordering(517) 00:13:22.197 fused_ordering(518) 00:13:22.197 fused_ordering(519) 00:13:22.197 fused_ordering(520) 00:13:22.197 fused_ordering(521) 00:13:22.197 fused_ordering(522) 00:13:22.197 fused_ordering(523) 00:13:22.197 fused_ordering(524) 00:13:22.197 fused_ordering(525) 00:13:22.197 fused_ordering(526) 00:13:22.197 fused_ordering(527) 00:13:22.197 fused_ordering(528) 00:13:22.197 fused_ordering(529) 00:13:22.197 fused_ordering(530) 00:13:22.197 fused_ordering(531) 00:13:22.197 fused_ordering(532) 00:13:22.197 fused_ordering(533) 00:13:22.197 fused_ordering(534) 00:13:22.197 fused_ordering(535) 00:13:22.197 fused_ordering(536) 00:13:22.197 fused_ordering(537) 00:13:22.197 fused_ordering(538) 00:13:22.197 fused_ordering(539) 00:13:22.197 fused_ordering(540) 00:13:22.197 fused_ordering(541) 00:13:22.197 fused_ordering(542) 00:13:22.197 fused_ordering(543) 00:13:22.197 fused_ordering(544) 00:13:22.197 fused_ordering(545) 00:13:22.197 fused_ordering(546) 00:13:22.197 fused_ordering(547) 00:13:22.197 fused_ordering(548) 00:13:22.197 fused_ordering(549) 00:13:22.197 fused_ordering(550) 00:13:22.197 fused_ordering(551) 00:13:22.197 fused_ordering(552) 00:13:22.197 fused_ordering(553) 00:13:22.197 fused_ordering(554) 00:13:22.197 fused_ordering(555) 00:13:22.197 fused_ordering(556) 00:13:22.197 fused_ordering(557) 00:13:22.197 fused_ordering(558) 00:13:22.197 fused_ordering(559) 00:13:22.197 fused_ordering(560) 00:13:22.197 fused_ordering(561) 00:13:22.197 fused_ordering(562) 00:13:22.197 fused_ordering(563) 00:13:22.197 fused_ordering(564) 00:13:22.197 fused_ordering(565) 00:13:22.197 fused_ordering(566) 00:13:22.197 fused_ordering(567) 00:13:22.197 fused_ordering(568) 00:13:22.197 fused_ordering(569) 00:13:22.197 fused_ordering(570) 00:13:22.197 fused_ordering(571) 00:13:22.197 fused_ordering(572) 00:13:22.197 fused_ordering(573) 00:13:22.197 
fused_ordering(574) 00:13:22.197 fused_ordering(575) 00:13:22.197 fused_ordering(576) 00:13:22.197 fused_ordering(577) 00:13:22.197 fused_ordering(578) 00:13:22.197 fused_ordering(579) 00:13:22.197 fused_ordering(580) 00:13:22.197 fused_ordering(581) 00:13:22.197 fused_ordering(582) 00:13:22.197 fused_ordering(583) 00:13:22.197 fused_ordering(584) 00:13:22.197 fused_ordering(585) 00:13:22.197 fused_ordering(586) 00:13:22.197 fused_ordering(587) 00:13:22.197 fused_ordering(588) 00:13:22.197 fused_ordering(589) 00:13:22.197 fused_ordering(590) 00:13:22.197 fused_ordering(591) 00:13:22.197 fused_ordering(592) 00:13:22.197 fused_ordering(593) 00:13:22.197 fused_ordering(594) 00:13:22.197 fused_ordering(595) 00:13:22.197 fused_ordering(596) 00:13:22.197 fused_ordering(597) 00:13:22.197 fused_ordering(598) 00:13:22.197 fused_ordering(599) 00:13:22.197 fused_ordering(600) 00:13:22.197 fused_ordering(601) 00:13:22.197 fused_ordering(602) 00:13:22.197 fused_ordering(603) 00:13:22.197 fused_ordering(604) 00:13:22.197 fused_ordering(605) 00:13:22.197 fused_ordering(606) 00:13:22.197 fused_ordering(607) 00:13:22.197 fused_ordering(608) 00:13:22.197 fused_ordering(609) 00:13:22.197 fused_ordering(610) 00:13:22.197 fused_ordering(611) 00:13:22.197 fused_ordering(612) 00:13:22.197 fused_ordering(613) 00:13:22.197 fused_ordering(614) 00:13:22.197 fused_ordering(615) 00:13:22.766 fused_ordering(616) 00:13:22.766 fused_ordering(617) 00:13:22.766 fused_ordering(618) 00:13:22.766 fused_ordering(619) 00:13:22.766 fused_ordering(620) 00:13:22.766 fused_ordering(621) 00:13:22.766 fused_ordering(622) 00:13:22.766 fused_ordering(623) 00:13:22.766 fused_ordering(624) 00:13:22.766 fused_ordering(625) 00:13:22.766 fused_ordering(626) 00:13:22.766 fused_ordering(627) 00:13:22.766 fused_ordering(628) 00:13:22.766 fused_ordering(629) 00:13:22.766 fused_ordering(630) 00:13:22.766 fused_ordering(631) 00:13:22.766 fused_ordering(632) 00:13:22.766 fused_ordering(633) 00:13:22.766 fused_ordering(634) 
00:13:22.766 fused_ordering(635) 00:13:22.766 fused_ordering(636) 00:13:22.766 fused_ordering(637) 00:13:22.766 fused_ordering(638) 00:13:22.766 fused_ordering(639) 00:13:22.766 fused_ordering(640) 00:13:22.766 fused_ordering(641) 00:13:22.766 fused_ordering(642) 00:13:22.766 fused_ordering(643) 00:13:22.766 fused_ordering(644) 00:13:22.766 fused_ordering(645) 00:13:22.766 fused_ordering(646) 00:13:22.766 fused_ordering(647) 00:13:22.766 fused_ordering(648) 00:13:22.766 fused_ordering(649) 00:13:22.766 fused_ordering(650) 00:13:22.766 fused_ordering(651) 00:13:22.766 fused_ordering(652) 00:13:22.766 fused_ordering(653) 00:13:22.766 fused_ordering(654) 00:13:22.766 fused_ordering(655) 00:13:22.766 fused_ordering(656) 00:13:22.766 fused_ordering(657) 00:13:22.766 fused_ordering(658) 00:13:22.766 fused_ordering(659) 00:13:22.766 fused_ordering(660) 00:13:22.766 fused_ordering(661) 00:13:22.766 fused_ordering(662) 00:13:22.766 fused_ordering(663) 00:13:22.766 fused_ordering(664) 00:13:22.766 fused_ordering(665) 00:13:22.766 fused_ordering(666) 00:13:22.766 fused_ordering(667) 00:13:22.766 fused_ordering(668) 00:13:22.766 fused_ordering(669) 00:13:22.766 fused_ordering(670) 00:13:22.766 fused_ordering(671) 00:13:22.766 fused_ordering(672) 00:13:22.766 fused_ordering(673) 00:13:22.766 fused_ordering(674) 00:13:22.766 fused_ordering(675) 00:13:22.766 fused_ordering(676) 00:13:22.766 fused_ordering(677) 00:13:22.766 fused_ordering(678) 00:13:22.766 fused_ordering(679) 00:13:22.766 fused_ordering(680) 00:13:22.766 fused_ordering(681) 00:13:22.766 fused_ordering(682) 00:13:22.766 fused_ordering(683) 00:13:22.766 fused_ordering(684) 00:13:22.766 fused_ordering(685) 00:13:22.766 fused_ordering(686) 00:13:22.766 fused_ordering(687) 00:13:22.766 fused_ordering(688) 00:13:22.766 fused_ordering(689) 00:13:22.766 fused_ordering(690) 00:13:22.766 fused_ordering(691) 00:13:22.766 fused_ordering(692) 00:13:22.766 fused_ordering(693) 00:13:22.766 fused_ordering(694) 00:13:22.766 
fused_ordering(695) 00:13:22.766 fused_ordering(696) 00:13:22.766 fused_ordering(697) 00:13:22.766 fused_ordering(698) 00:13:22.766 fused_ordering(699) 00:13:22.766 fused_ordering(700) 00:13:22.766 fused_ordering(701) 00:13:22.766 fused_ordering(702) 00:13:22.766 fused_ordering(703) 00:13:22.766 fused_ordering(704) 00:13:22.766 fused_ordering(705) 00:13:22.766 fused_ordering(706) 00:13:22.766 fused_ordering(707) 00:13:22.766 fused_ordering(708) 00:13:22.766 fused_ordering(709) 00:13:22.766 fused_ordering(710) 00:13:22.766 fused_ordering(711) 00:13:22.766 fused_ordering(712) 00:13:22.766 fused_ordering(713) 00:13:22.766 fused_ordering(714) 00:13:22.766 fused_ordering(715) 00:13:22.766 fused_ordering(716) 00:13:22.766 fused_ordering(717) 00:13:22.766 fused_ordering(718) 00:13:22.766 fused_ordering(719) 00:13:22.766 fused_ordering(720) 00:13:22.766 fused_ordering(721) 00:13:22.766 fused_ordering(722) 00:13:22.766 fused_ordering(723) 00:13:22.766 fused_ordering(724) 00:13:22.766 fused_ordering(725) 00:13:22.766 fused_ordering(726) 00:13:22.766 fused_ordering(727) 00:13:22.766 fused_ordering(728) 00:13:22.766 fused_ordering(729) 00:13:22.766 fused_ordering(730) 00:13:22.766 fused_ordering(731) 00:13:22.766 fused_ordering(732) 00:13:22.766 fused_ordering(733) 00:13:22.766 fused_ordering(734) 00:13:22.766 fused_ordering(735) 00:13:22.766 fused_ordering(736) 00:13:22.766 fused_ordering(737) 00:13:22.766 fused_ordering(738) 00:13:22.766 fused_ordering(739) 00:13:22.766 fused_ordering(740) 00:13:22.766 fused_ordering(741) 00:13:22.766 fused_ordering(742) 00:13:22.766 fused_ordering(743) 00:13:22.766 fused_ordering(744) 00:13:22.766 fused_ordering(745) 00:13:22.766 fused_ordering(746) 00:13:22.767 fused_ordering(747) 00:13:22.767 fused_ordering(748) 00:13:22.767 fused_ordering(749) 00:13:22.767 fused_ordering(750) 00:13:22.767 fused_ordering(751) 00:13:22.767 fused_ordering(752) 00:13:22.767 fused_ordering(753) 00:13:22.767 fused_ordering(754) 00:13:22.767 fused_ordering(755) 
00:13:22.767 fused_ordering(756) 00:13:22.767 fused_ordering(757) 00:13:22.767 fused_ordering(758) 00:13:22.767 fused_ordering(759) 00:13:22.767 fused_ordering(760) 00:13:22.767 fused_ordering(761) 00:13:22.767 fused_ordering(762) 00:13:22.767 fused_ordering(763) 00:13:22.767 fused_ordering(764) 00:13:22.767 fused_ordering(765) 00:13:22.767 fused_ordering(766) 00:13:22.767 fused_ordering(767) 00:13:22.767 fused_ordering(768) 00:13:22.767 fused_ordering(769) 00:13:22.767 fused_ordering(770) 00:13:22.767 fused_ordering(771) 00:13:22.767 fused_ordering(772) 00:13:22.767 fused_ordering(773) 00:13:22.767 fused_ordering(774) 00:13:22.767 fused_ordering(775) 00:13:22.767 fused_ordering(776) 00:13:22.767 fused_ordering(777) 00:13:22.767 fused_ordering(778) 00:13:22.767 fused_ordering(779) 00:13:22.767 fused_ordering(780) 00:13:22.767 fused_ordering(781) 00:13:22.767 fused_ordering(782) 00:13:22.767 fused_ordering(783) 00:13:22.767 fused_ordering(784) 00:13:22.767 fused_ordering(785) 00:13:22.767 fused_ordering(786) 00:13:22.767 fused_ordering(787) 00:13:22.767 fused_ordering(788) 00:13:22.767 fused_ordering(789) 00:13:22.767 fused_ordering(790) 00:13:22.767 fused_ordering(791) 00:13:22.767 fused_ordering(792) 00:13:22.767 fused_ordering(793) 00:13:22.767 fused_ordering(794) 00:13:22.767 fused_ordering(795) 00:13:22.767 fused_ordering(796) 00:13:22.767 fused_ordering(797) 00:13:22.767 fused_ordering(798) 00:13:22.767 fused_ordering(799) 00:13:22.767 fused_ordering(800) 00:13:22.767 fused_ordering(801) 00:13:22.767 fused_ordering(802) 00:13:22.767 fused_ordering(803) 00:13:22.767 fused_ordering(804) 00:13:22.767 fused_ordering(805) 00:13:22.767 fused_ordering(806) 00:13:22.767 fused_ordering(807) 00:13:22.767 fused_ordering(808) 00:13:22.767 fused_ordering(809) 00:13:22.767 fused_ordering(810) 00:13:22.767 fused_ordering(811) 00:13:22.767 fused_ordering(812) 00:13:22.767 fused_ordering(813) 00:13:22.767 fused_ordering(814) 00:13:22.767 fused_ordering(815) 00:13:22.767 
fused_ordering(816) 00:13:22.767 fused_ordering(817) 00:13:22.767 fused_ordering(818) 00:13:22.767 fused_ordering(819) 00:13:22.767 fused_ordering(820) 00:13:23.027 fused_ordering(821) 00:13:23.027 fused_ordering(822) 00:13:23.027 fused_ordering(823) 00:13:23.027 fused_ordering(824) 00:13:23.027 fused_ordering(825) 00:13:23.027 fused_ordering(826) 00:13:23.027 fused_ordering(827) 00:13:23.027 fused_ordering(828) 00:13:23.027 fused_ordering(829) 00:13:23.027 fused_ordering(830) 00:13:23.027 fused_ordering(831) 00:13:23.027 fused_ordering(832) 00:13:23.027 fused_ordering(833) 00:13:23.027 fused_ordering(834) 00:13:23.027 fused_ordering(835) 00:13:23.027 fused_ordering(836) 00:13:23.027 fused_ordering(837) 00:13:23.027 fused_ordering(838) 00:13:23.027 fused_ordering(839) 00:13:23.027 fused_ordering(840) 00:13:23.027 fused_ordering(841) 00:13:23.027 fused_ordering(842) 00:13:23.027 fused_ordering(843) 00:13:23.027 fused_ordering(844) 00:13:23.027 fused_ordering(845) 00:13:23.027 fused_ordering(846) 00:13:23.027 fused_ordering(847) 00:13:23.027 fused_ordering(848) 00:13:23.027 fused_ordering(849) 00:13:23.027 fused_ordering(850) 00:13:23.027 fused_ordering(851) 00:13:23.027 fused_ordering(852) 00:13:23.027 fused_ordering(853) 00:13:23.027 fused_ordering(854) 00:13:23.027 fused_ordering(855) 00:13:23.027 fused_ordering(856) 00:13:23.027 fused_ordering(857) 00:13:23.027 fused_ordering(858) 00:13:23.027 fused_ordering(859) 00:13:23.027 fused_ordering(860) 00:13:23.027 fused_ordering(861) 00:13:23.027 fused_ordering(862) 00:13:23.027 fused_ordering(863) 00:13:23.027 fused_ordering(864) 00:13:23.027 fused_ordering(865) 00:13:23.027 fused_ordering(866) 00:13:23.027 fused_ordering(867) 00:13:23.027 fused_ordering(868) 00:13:23.027 fused_ordering(869) 00:13:23.027 fused_ordering(870) 00:13:23.027 fused_ordering(871) 00:13:23.027 fused_ordering(872) 00:13:23.027 fused_ordering(873) 00:13:23.027 fused_ordering(874) 00:13:23.027 fused_ordering(875) 00:13:23.027 fused_ordering(876) 
00:13:23.027 fused_ordering(877) 00:13:23.027 fused_ordering(878) 00:13:23.027 fused_ordering(879) 00:13:23.027 fused_ordering(880) 00:13:23.027 fused_ordering(881) 00:13:23.027 fused_ordering(882) 00:13:23.027 fused_ordering(883) 00:13:23.027 fused_ordering(884) 00:13:23.027 fused_ordering(885) 00:13:23.027 fused_ordering(886) 00:13:23.027 fused_ordering(887) 00:13:23.027 fused_ordering(888) 00:13:23.027 fused_ordering(889) 00:13:23.027 fused_ordering(890) 00:13:23.027 fused_ordering(891) 00:13:23.027 fused_ordering(892) 00:13:23.027 fused_ordering(893) 00:13:23.027 fused_ordering(894) 00:13:23.027 fused_ordering(895) 00:13:23.027 fused_ordering(896) 00:13:23.027 fused_ordering(897) 00:13:23.027 fused_ordering(898) 00:13:23.027 fused_ordering(899) 00:13:23.027 fused_ordering(900) 00:13:23.027 fused_ordering(901) 00:13:23.027 fused_ordering(902) 00:13:23.027 fused_ordering(903) 00:13:23.027 fused_ordering(904) 00:13:23.027 fused_ordering(905) 00:13:23.027 fused_ordering(906) 00:13:23.027 fused_ordering(907) 00:13:23.027 fused_ordering(908) 00:13:23.027 fused_ordering(909) 00:13:23.027 fused_ordering(910) 00:13:23.027 fused_ordering(911) 00:13:23.027 fused_ordering(912) 00:13:23.027 fused_ordering(913) 00:13:23.027 fused_ordering(914) 00:13:23.027 fused_ordering(915) 00:13:23.027 fused_ordering(916) 00:13:23.027 fused_ordering(917) 00:13:23.027 fused_ordering(918) 00:13:23.027 fused_ordering(919) 00:13:23.027 fused_ordering(920) 00:13:23.027 fused_ordering(921) 00:13:23.027 fused_ordering(922) 00:13:23.027 fused_ordering(923) 00:13:23.027 fused_ordering(924) 00:13:23.027 fused_ordering(925) 00:13:23.027 fused_ordering(926) 00:13:23.027 fused_ordering(927) 00:13:23.027 fused_ordering(928) 00:13:23.027 fused_ordering(929) 00:13:23.027 fused_ordering(930) 00:13:23.027 fused_ordering(931) 00:13:23.027 fused_ordering(932) 00:13:23.027 fused_ordering(933) 00:13:23.027 fused_ordering(934) 00:13:23.027 fused_ordering(935) 00:13:23.027 fused_ordering(936) 00:13:23.027 
fused_ordering(937) 00:13:23.027 fused_ordering(938) 00:13:23.027 fused_ordering(939) 00:13:23.027 fused_ordering(940) 00:13:23.027 fused_ordering(941) 00:13:23.027 fused_ordering(942) 00:13:23.027 fused_ordering(943) 00:13:23.027 fused_ordering(944) 00:13:23.027 fused_ordering(945) 00:13:23.027 fused_ordering(946) 00:13:23.027 fused_ordering(947) 00:13:23.027 fused_ordering(948) 00:13:23.027 fused_ordering(949) 00:13:23.027 fused_ordering(950) 00:13:23.027 fused_ordering(951) 00:13:23.027 fused_ordering(952) 00:13:23.027 fused_ordering(953) 00:13:23.027 fused_ordering(954) 00:13:23.027 fused_ordering(955) 00:13:23.027 fused_ordering(956) 00:13:23.027 fused_ordering(957) 00:13:23.027 fused_ordering(958) 00:13:23.027 fused_ordering(959) 00:13:23.027 fused_ordering(960) 00:13:23.027 fused_ordering(961) 00:13:23.027 fused_ordering(962) 00:13:23.027 fused_ordering(963) 00:13:23.027 fused_ordering(964) 00:13:23.027 fused_ordering(965) 00:13:23.027 fused_ordering(966) 00:13:23.027 fused_ordering(967) 00:13:23.027 fused_ordering(968) 00:13:23.027 fused_ordering(969) 00:13:23.027 fused_ordering(970) 00:13:23.027 fused_ordering(971) 00:13:23.027 fused_ordering(972) 00:13:23.027 fused_ordering(973) 00:13:23.027 fused_ordering(974) 00:13:23.027 fused_ordering(975) 00:13:23.027 fused_ordering(976) 00:13:23.027 fused_ordering(977) 00:13:23.027 fused_ordering(978) 00:13:23.027 fused_ordering(979) 00:13:23.027 fused_ordering(980) 00:13:23.027 fused_ordering(981) 00:13:23.027 fused_ordering(982) 00:13:23.027 fused_ordering(983) 00:13:23.027 fused_ordering(984) 00:13:23.027 fused_ordering(985) 00:13:23.027 fused_ordering(986) 00:13:23.027 fused_ordering(987) 00:13:23.027 fused_ordering(988) 00:13:23.027 fused_ordering(989) 00:13:23.027 fused_ordering(990) 00:13:23.027 fused_ordering(991) 00:13:23.027 fused_ordering(992) 00:13:23.027 fused_ordering(993) 00:13:23.027 fused_ordering(994) 00:13:23.027 fused_ordering(995) 00:13:23.027 fused_ordering(996) 00:13:23.027 fused_ordering(997) 
00:13:23.027 fused_ordering(998) 00:13:23.027 fused_ordering(999) 00:13:23.027 fused_ordering(1000) 00:13:23.027 fused_ordering(1001) 00:13:23.027 fused_ordering(1002) 00:13:23.027 fused_ordering(1003) 00:13:23.027 fused_ordering(1004) 00:13:23.027 fused_ordering(1005) 00:13:23.027 fused_ordering(1006) 00:13:23.027 fused_ordering(1007) 00:13:23.027 fused_ordering(1008) 00:13:23.027 fused_ordering(1009) 00:13:23.027 fused_ordering(1010) 00:13:23.027 fused_ordering(1011) 00:13:23.027 fused_ordering(1012) 00:13:23.027 fused_ordering(1013) 00:13:23.027 fused_ordering(1014) 00:13:23.027 fused_ordering(1015) 00:13:23.027 fused_ordering(1016) 00:13:23.027 fused_ordering(1017) 00:13:23.027 fused_ordering(1018) 00:13:23.027 fused_ordering(1019) 00:13:23.027 fused_ordering(1020) 00:13:23.027 fused_ordering(1021) 00:13:23.027 fused_ordering(1022) 00:13:23.027 fused_ordering(1023) 00:13:23.027 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:23.027 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:23.027 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:23.027 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:23.027 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:23.027 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:23.027 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:23.027 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:23.027 rmmod nvme_tcp 00:13:23.027 rmmod nvme_fabrics 00:13:23.027 rmmod nvme_keyring 00:13:23.287 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:13:23.287 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:23.287 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:23.287 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 390916 ']' 00:13:23.287 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 390916 00:13:23.287 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 390916 ']' 00:13:23.287 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 390916 00:13:23.287 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:23.287 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.287 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 390916 00:13:23.287 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:23.287 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:23.287 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 390916' 00:13:23.287 killing process with pid 390916 00:13:23.287 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 390916 00:13:23.288 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 390916 00:13:23.288 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:23.288 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:13:23.288 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:23.288 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:23.288 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:23.288 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:23.288 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:23.288 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:23.288 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:23.288 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.288 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.288 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:25.824 00:13:25.824 real 0m10.646s 00:13:25.824 user 0m4.976s 00:13:25.824 sys 0m5.812s 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:25.824 ************************************ 00:13:25.824 END TEST nvmf_fused_ordering 00:13:25.824 ************************************ 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:25.824 12:23:08 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:25.824 ************************************ 00:13:25.824 START TEST nvmf_ns_masking 00:13:25.824 ************************************ 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:25.824 * Looking for test storage... 00:13:25.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:25.824 12:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:25.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.824 --rc genhtml_branch_coverage=1 00:13:25.824 --rc genhtml_function_coverage=1 00:13:25.824 --rc genhtml_legend=1 00:13:25.824 --rc geninfo_all_blocks=1 00:13:25.824 --rc geninfo_unexecuted_blocks=1 00:13:25.824 00:13:25.824 ' 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:25.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.824 --rc genhtml_branch_coverage=1 00:13:25.824 --rc genhtml_function_coverage=1 00:13:25.824 --rc genhtml_legend=1 00:13:25.824 --rc geninfo_all_blocks=1 00:13:25.824 --rc geninfo_unexecuted_blocks=1 00:13:25.824 00:13:25.824 ' 00:13:25.824 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:25.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.825 --rc genhtml_branch_coverage=1 00:13:25.825 --rc genhtml_function_coverage=1 00:13:25.825 --rc genhtml_legend=1 00:13:25.825 --rc geninfo_all_blocks=1 00:13:25.825 --rc geninfo_unexecuted_blocks=1 00:13:25.825 00:13:25.825 ' 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:25.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.825 --rc genhtml_branch_coverage=1 00:13:25.825 --rc 
genhtml_function_coverage=1 00:13:25.825 --rc genhtml_legend=1 00:13:25.825 --rc geninfo_all_blocks=1 00:13:25.825 --rc geninfo_unexecuted_blocks=1 00:13:25.825 00:13:25.825 ' 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:25.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3b41664c-7438-430f-9948-09c99b71afc5 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=43ee35fc-a860-47a2-bde6-936914cf832b 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ff72bc86-b15b-441d-a330-18443fb568eb 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:25.825 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:32.406 12:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.406 12:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:32.406 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:32.406 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:13:32.406 Found net devices under 0000:86:00.0: cvl_0_0 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.406 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:32.407 Found net devices under 0000:86:00.1: cvl_0_1 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:32.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:13:32.407 00:13:32.407 --- 10.0.0.2 ping statistics --- 00:13:32.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.407 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:32.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:32.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:13:32.407 00:13:32.407 --- 10.0.0.1 ping statistics --- 00:13:32.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.407 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=394919 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 394919 
00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 394919 ']' 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:32.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:32.407 [2024-11-20 12:23:14.735581] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:13:32.407 [2024-11-20 12:23:14.735630] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.407 [2024-11-20 12:23:14.814787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.407 [2024-11-20 12:23:14.853439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.407 [2024-11-20 12:23:14.853476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:32.407 [2024-11-20 12:23:14.853483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.407 [2024-11-20 12:23:14.853489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:32.407 [2024-11-20 12:23:14.853494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:32.407 [2024-11-20 12:23:14.854068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.407 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:32.407 [2024-11-20 12:23:15.161598] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.407 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:32.407 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:32.407 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:13:32.407 Malloc1 00:13:32.407 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:32.666 Malloc2 00:13:32.666 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:32.925 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:33.184 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.184 [2024-11-20 12:23:16.215875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.184 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:33.184 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ff72bc86-b15b-441d-a330-18443fb568eb -a 10.0.0.2 -s 4420 -i 4 00:13:33.443 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:33.443 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:33.443 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.443 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:33.443 12:23:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:35.348 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:35.348 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:35.348 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:35.348 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:35.348 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:35.348 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:35.348 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:35.348 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:35.607 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:35.607 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:35.607 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:35.607 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:35.607 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:35.607 [ 0]:0x1 00:13:35.607 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:35.607 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:35.607 
12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=062cef9bf6114b55836fb27d0debe690 00:13:35.607 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 062cef9bf6114b55836fb27d0debe690 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:35.607 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:35.866 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:35.866 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:35.866 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:35.866 [ 0]:0x1 00:13:35.866 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:35.866 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:35.866 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=062cef9bf6114b55836fb27d0debe690 00:13:35.866 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 062cef9bf6114b55836fb27d0debe690 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:35.866 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:35.866 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:35.866 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:35.866 [ 1]:0x2 00:13:35.866 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:13:35.866 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:35.866 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e8973eafa0cc42788051ecf8228eedd7 00:13:35.866 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e8973eafa0cc42788051ecf8228eedd7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:35.866 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:35.866 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:36.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.125 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.383 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:36.641 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:36.641 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ff72bc86-b15b-441d-a330-18443fb568eb -a 10.0.0.2 -s 4420 -i 4 00:13:36.641 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:36.641 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:36.641 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:36.641 12:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:36.641 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:36.641 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:38.544 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:38.544 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:38.544 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:38.802 [ 0]:0x2 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e8973eafa0cc42788051ecf8228eedd7 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e8973eafa0cc42788051ecf8228eedd7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.802 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:39.138 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:39.138 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:39.138 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:39.138 [ 0]:0x1 00:13:39.138 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:39.138 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:39.138 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=062cef9bf6114b55836fb27d0debe690 00:13:39.138 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 062cef9bf6114b55836fb27d0debe690 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:39.138 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:39.138 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:39.138 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:39.138 [ 1]:0x2 00:13:39.139 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:39.139 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:39.139 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e8973eafa0cc42788051ecf8228eedd7 00:13:39.139 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e8973eafa0cc42788051ecf8228eedd7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:39.139 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:39.461 [ 0]:0x2 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e8973eafa0cc42788051ecf8228eedd7 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e8973eafa0cc42788051ecf8228eedd7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:39.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.461 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:39.733 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:39.733 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ff72bc86-b15b-441d-a330-18443fb568eb -a 10.0.0.2 -s 4420 -i 4 00:13:39.991 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:39.991 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:39.991 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:39.991 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:39.991 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:39.991 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:41.891 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:41.891 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:41.891 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:41.891 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:41.891 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:41.891 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:41.891 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:41.891 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:42.149 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:42.150 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:42.150 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:42.150 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.150 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:42.150 [ 0]:0x1 00:13:42.150 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:42.150 12:23:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.150 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=062cef9bf6114b55836fb27d0debe690 00:13:42.150 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 062cef9bf6114b55836fb27d0debe690 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.150 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:42.150 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.150 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:42.150 [ 1]:0x2 00:13:42.150 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:42.150 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.150 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e8973eafa0cc42788051ecf8228eedd7 00:13:42.150 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e8973eafa0cc42788051ecf8228eedd7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.150 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:42.409 
12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:42.409 [ 0]:0x2 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e8973eafa0cc42788051ecf8228eedd7 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e8973eafa0cc42788051ecf8228eedd7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:42.409 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:42.410 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:42.410 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.410 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.410 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.410 12:23:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.410 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.410 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.410 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.410 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:42.410 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:42.669 [2024-11-20 12:23:25.666494] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:42.669 request: 00:13:42.669 { 00:13:42.669 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:42.669 "nsid": 2, 00:13:42.669 "host": "nqn.2016-06.io.spdk:host1", 00:13:42.669 "method": "nvmf_ns_remove_host", 00:13:42.669 "req_id": 1 00:13:42.669 } 00:13:42.669 Got JSON-RPC error response 00:13:42.669 response: 00:13:42.669 { 00:13:42.669 "code": -32602, 00:13:42.669 "message": "Invalid parameters" 00:13:42.669 } 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:42.669 12:23:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.669 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:42.928 [ 0]:0x2 00:13:42.928 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:42.928 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.928 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e8973eafa0cc42788051ecf8228eedd7 00:13:42.928 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e8973eafa0cc42788051ecf8228eedd7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.928 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:42.928 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:42.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.928 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=396929 00:13:42.928 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:42.928 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.928 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 396929 /var/tmp/host.sock 00:13:42.928 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 396929 ']' 00:13:42.928 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:42.928 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.928 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:42.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:42.928 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.928 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:42.928 [2024-11-20 12:23:26.032895] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:13:42.928 [2024-11-20 12:23:26.032941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396929 ] 00:13:43.187 [2024-11-20 12:23:26.108571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.187 [2024-11-20 12:23:26.149286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.446 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.446 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:43.446 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.446 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:43.704 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3b41664c-7438-430f-9948-09c99b71afc5 00:13:43.704 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:43.704 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3B41664C7438430F994809C99B71AFC5 -i 00:13:43.963 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 43ee35fc-a860-47a2-bde6-936914cf832b 00:13:43.963 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:43.963 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 43EE35FCA86047A2BDE6936914CF832B -i 00:13:44.223 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:44.223 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:44.481 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:44.481 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:45.048 nvme0n1 00:13:45.048 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:45.048 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:45.307 nvme1n2 00:13:45.307 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:45.307 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:45.307 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:45.307 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:45.307 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:45.566 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:45.566 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:45.566 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:45.566 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:45.826 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3b41664c-7438-430f-9948-09c99b71afc5 == \3\b\4\1\6\6\4\c\-\7\4\3\8\-\4\3\0\f\-\9\9\4\8\-\0\9\c\9\9\b\7\1\a\f\c\5 ]] 00:13:45.826 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:45.826 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:45.826 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:45.827 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 43ee35fc-a860-47a2-bde6-936914cf832b == \4\3\e\e\3\5\f\c\-\a\8\6\0\-\4\7\a\2\-\b\d\e\6\-\9\3\6\9\1\4\c\f\8\3\2\b ]] 00:13:45.827 12:23:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.086 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:46.343 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 3b41664c-7438-430f-9948-09c99b71afc5 00:13:46.343 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:46.343 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3B41664C7438430F994809C99B71AFC5 00:13:46.343 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:46.343 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3B41664C7438430F994809C99B71AFC5 00:13:46.343 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:46.343 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.343 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:46.344 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.344 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:46.344 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.344 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:46.344 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:46.344 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3B41664C7438430F994809C99B71AFC5 00:13:46.602 [2024-11-20 12:23:29.513148] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:46.602 [2024-11-20 12:23:29.513182] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:46.602 [2024-11-20 12:23:29.513191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:46.602 request: 00:13:46.602 { 00:13:46.602 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.602 "namespace": { 00:13:46.602 "bdev_name": "invalid", 00:13:46.602 "nsid": 1, 00:13:46.602 "nguid": "3B41664C7438430F994809C99B71AFC5", 00:13:46.602 "no_auto_visible": false 00:13:46.602 }, 00:13:46.602 "method": "nvmf_subsystem_add_ns", 00:13:46.602 "req_id": 1 00:13:46.602 } 00:13:46.602 Got JSON-RPC error response 00:13:46.602 response: 00:13:46.602 { 00:13:46.602 "code": -32602, 00:13:46.602 "message": "Invalid parameters" 00:13:46.602 } 00:13:46.602 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:46.602 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:46.602 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:46.602 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:46.602 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 3b41664c-7438-430f-9948-09c99b71afc5 00:13:46.602 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:46.602 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3B41664C7438430F994809C99B71AFC5 -i 00:13:46.862 12:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:48.765 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:48.765 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:48.765 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:49.024 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:49.024 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 396929 00:13:49.024 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 396929 ']' 00:13:49.024 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 396929 00:13:49.024 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:49.024 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.024 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396929 00:13:49.024 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:49.024 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:49.024 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396929' 00:13:49.024 killing process with pid 396929 00:13:49.024 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 396929 00:13:49.024 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 396929 00:13:49.282 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:49.541 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:49.541 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:49.541 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:49.541 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:49.541 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:49.541 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:49.541 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:49.541 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:49.541 rmmod nvme_tcp 00:13:49.541 rmmod 
nvme_fabrics 00:13:49.541 rmmod nvme_keyring 00:13:49.541 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:49.541 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:49.541 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:49.541 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 394919 ']' 00:13:49.541 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 394919 00:13:49.541 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 394919 ']' 00:13:49.541 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 394919 00:13:49.541 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:49.541 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.541 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394919 00:13:49.800 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.800 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.800 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394919' 00:13:49.800 killing process with pid 394919 00:13:49.800 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 394919 00:13:49.800 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 394919 00:13:49.800 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:49.800 12:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:49.800 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:49.800 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:49.800 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:49.800 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:49.800 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:49.800 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:49.800 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:49.800 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.800 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.800 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.340 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:52.340 00:13:52.340 real 0m26.434s 00:13:52.340 user 0m31.625s 00:13:52.340 sys 0m7.103s 00:13:52.340 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.340 12:23:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:52.340 ************************************ 00:13:52.340 END TEST nvmf_ns_masking 00:13:52.340 ************************************ 00:13:52.340 12:23:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:52.340 12:23:34 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:52.340 12:23:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:52.340 12:23:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.340 12:23:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:52.340 ************************************ 00:13:52.340 START TEST nvmf_nvme_cli 00:13:52.340 ************************************ 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:52.340 * Looking for test storage... 00:13:52.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:52.340 12:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:52.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.340 --rc genhtml_branch_coverage=1 00:13:52.340 --rc genhtml_function_coverage=1 00:13:52.340 --rc genhtml_legend=1 00:13:52.340 --rc geninfo_all_blocks=1 00:13:52.340 --rc geninfo_unexecuted_blocks=1 00:13:52.340 
00:13:52.340 ' 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:52.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.340 --rc genhtml_branch_coverage=1 00:13:52.340 --rc genhtml_function_coverage=1 00:13:52.340 --rc genhtml_legend=1 00:13:52.340 --rc geninfo_all_blocks=1 00:13:52.340 --rc geninfo_unexecuted_blocks=1 00:13:52.340 00:13:52.340 ' 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:52.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.340 --rc genhtml_branch_coverage=1 00:13:52.340 --rc genhtml_function_coverage=1 00:13:52.340 --rc genhtml_legend=1 00:13:52.340 --rc geninfo_all_blocks=1 00:13:52.340 --rc geninfo_unexecuted_blocks=1 00:13:52.340 00:13:52.340 ' 00:13:52.340 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:52.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.340 --rc genhtml_branch_coverage=1 00:13:52.340 --rc genhtml_function_coverage=1 00:13:52.340 --rc genhtml_legend=1 00:13:52.340 --rc geninfo_all_blocks=1 00:13:52.340 --rc geninfo_unexecuted_blocks=1 00:13:52.340 00:13:52.340 ' 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
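The `scripts/common.sh` trace above (`lt 1.15 2` via `cmp_versions 1.15 '<' 2`) splits both version strings on `.`/`-` with `IFS=.-` and compares them segment by segment. A minimal sketch of that comparison, with the same `lt` name but simplified internals (the real helper handles more operators than just less-than):

```shell
# Simplified sketch of the segment-wise version comparison traced above
# (scripts/common.sh cmp_versions); returns 0 iff $1 is strictly older than $2.
lt() {
  local v1=$1 v2=$2 ver1 ver2 i
  IFS=.- read -ra ver1 <<< "$v1"   # split on '.' and '-', as in the trace
  IFS=.- read -ra ver2 <<< "$v2"
  for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
    local a=${ver1[i]:-0} b=${ver2[i]:-0}  # missing segments compare as 0
    ((a < b)) && return 0
    ((a > b)) && return 1
  done
  return 1  # equal versions are not less-than
}

lt 1.15 2 && echo "lcov is older than 2"   # 1 < 2 decides on the first segment
```

This is why the trace shows `ver1_l=2`, `ver2_l=1` and the decision falling out of the first segment comparison.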
00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.341 12:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:52.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:52.341 12:23:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:58.915 12:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:58.915 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:58.915 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.915 12:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:58.915 Found net devices under 0000:86:00.0: cvl_0_0 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:58.915 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:58.916 Found net devices under 0000:86:00.1: cvl_0_1 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:58.916 12:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:58.916 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:58.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:58.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:13:58.916 00:13:58.916 --- 10.0.0.2 ping statistics --- 00:13:58.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.916 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:58.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:58.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:13:58.916 00:13:58.916 --- 10.0.0.1 ping statistics --- 00:13:58.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.916 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:58.916 12:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=401639 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 401639 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 401639 ']' 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:58.916 [2024-11-20 12:23:41.235269] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:13:58.916 [2024-11-20 12:23:41.235316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.916 [2024-11-20 12:23:41.314697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:58.916 [2024-11-20 12:23:41.358389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.916 [2024-11-20 12:23:41.358428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.916 [2024-11-20 12:23:41.358435] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.916 [2024-11-20 12:23:41.358441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.916 [2024-11-20 12:23:41.358446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:58.916 [2024-11-20 12:23:41.360075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.916 [2024-11-20 12:23:41.360182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.916 [2024-11-20 12:23:41.360315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.916 [2024-11-20 12:23:41.360317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:58.916 [2024-11-20 12:23:41.496825] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:58.916 Malloc0 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:58.916 Malloc1 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.916 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:58.917 [2024-11-20 12:23:41.594056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:58.917 00:13:58.917 Discovery Log Number of Records 2, Generation counter 2 00:13:58.917 =====Discovery Log Entry 0====== 00:13:58.917 trtype: tcp 00:13:58.917 adrfam: ipv4 00:13:58.917 subtype: current discovery subsystem 00:13:58.917 treq: not required 00:13:58.917 portid: 0 00:13:58.917 trsvcid: 4420 
00:13:58.917 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:58.917 traddr: 10.0.0.2 00:13:58.917 eflags: explicit discovery connections, duplicate discovery information 00:13:58.917 sectype: none 00:13:58.917 =====Discovery Log Entry 1====== 00:13:58.917 trtype: tcp 00:13:58.917 adrfam: ipv4 00:13:58.917 subtype: nvme subsystem 00:13:58.917 treq: not required 00:13:58.917 portid: 0 00:13:58.917 trsvcid: 4420 00:13:58.917 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:58.917 traddr: 10.0.0.2 00:13:58.917 eflags: none 00:13:58.917 sectype: none 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:58.917 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:00.296 12:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:00.296 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:00.296 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:00.296 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:00.296 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:00.296 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:02.203 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:02.203 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:02.203 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:02.203 
12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:02.203 /dev/nvme0n2 ]] 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:02.203 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:02.462 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:02.462 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:02.462 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:14:02.462 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:02.462 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:02.462 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:02.462 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:02.462 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:02.462 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:02.462 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:02.462 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:02.462 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:02.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:02.721 rmmod nvme_tcp 00:14:02.721 rmmod nvme_fabrics 00:14:02.721 rmmod nvme_keyring 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 401639 ']' 
00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 401639 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 401639 ']' 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 401639 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 401639 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 401639' 00:14:02.721 killing process with pid 401639 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 401639 00:14:02.721 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 401639 00:14:02.981 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:02.981 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:02.981 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:02.981 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:02.981 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:02.981 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:14:02.981 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:02.981 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:02.981 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:02.981 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.981 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.981 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.518 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:05.518 00:14:05.518 real 0m13.059s 00:14:05.518 user 0m20.155s 00:14:05.518 sys 0m5.110s 00:14:05.518 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.518 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:05.518 ************************************ 00:14:05.518 END TEST nvmf_nvme_cli 00:14:05.518 ************************************ 00:14:05.518 12:23:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:05.518 12:23:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:05.518 12:23:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:05.518 12:23:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.518 12:23:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:05.518 ************************************ 00:14:05.518 START TEST 
nvmf_vfio_user 00:14:05.518 ************************************ 00:14:05.518 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:05.518 * Looking for test storage... 00:14:05.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:05.518 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:05.518 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:14:05.518 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:05.518 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:05.518 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:05.518 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:05.518 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:05.518 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:05.519 12:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:05.519 12:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:05.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.519 --rc genhtml_branch_coverage=1 00:14:05.519 --rc genhtml_function_coverage=1 00:14:05.519 --rc genhtml_legend=1 00:14:05.519 --rc geninfo_all_blocks=1 00:14:05.519 --rc geninfo_unexecuted_blocks=1 00:14:05.519 00:14:05.519 ' 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:05.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.519 --rc genhtml_branch_coverage=1 00:14:05.519 --rc genhtml_function_coverage=1 00:14:05.519 --rc genhtml_legend=1 00:14:05.519 --rc geninfo_all_blocks=1 00:14:05.519 --rc geninfo_unexecuted_blocks=1 00:14:05.519 00:14:05.519 ' 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:05.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.519 --rc genhtml_branch_coverage=1 00:14:05.519 --rc genhtml_function_coverage=1 00:14:05.519 --rc genhtml_legend=1 00:14:05.519 --rc geninfo_all_blocks=1 00:14:05.519 --rc geninfo_unexecuted_blocks=1 00:14:05.519 00:14:05.519 ' 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:05.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.519 --rc genhtml_branch_coverage=1 00:14:05.519 --rc genhtml_function_coverage=1 00:14:05.519 --rc genhtml_legend=1 00:14:05.519 --rc geninfo_all_blocks=1 00:14:05.519 --rc geninfo_unexecuted_blocks=1 00:14:05.519 00:14:05.519 ' 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:05.519 
12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:05.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:05.519 12:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:05.519 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=402934 00:14:05.520 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 402934' 00:14:05.520 Process pid: 402934 00:14:05.520 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:05.520 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 402934 00:14:05.520 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:05.520 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
402934 ']' 00:14:05.520 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.520 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.520 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.520 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.520 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:05.520 [2024-11-20 12:23:48.415988] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:14:05.520 [2024-11-20 12:23:48.416034] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.520 [2024-11-20 12:23:48.491920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:05.520 [2024-11-20 12:23:48.532580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.520 [2024-11-20 12:23:48.532617] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.520 [2024-11-20 12:23:48.532624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.520 [2024-11-20 12:23:48.532630] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.520 [2024-11-20 12:23:48.532635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:05.520 [2024-11-20 12:23:48.534117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.520 [2024-11-20 12:23:48.534148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.520 [2024-11-20 12:23:48.534183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.520 [2024-11-20 12:23:48.534184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.779 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:05.779 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:05.779 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:06.714 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:06.973 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:06.973 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:06.973 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:06.973 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:06.973 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:06.973 Malloc1 00:14:06.973 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:07.231 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:07.490 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:07.749 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:07.749 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:07.749 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:08.008 Malloc2 00:14:08.008 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:08.267 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:08.267 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:08.528 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:08.528 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:08.528 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:08.528 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:08.528 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:08.528 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:08.528 [2024-11-20 12:23:51.552656] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:14:08.528 [2024-11-20 12:23:51.552689] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid403423 ] 00:14:08.528 [2024-11-20 12:23:51.593999] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:08.528 [2024-11-20 12:23:51.606339] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:08.528 [2024-11-20 12:23:51.606362] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f284850f000 00:14:08.528 [2024-11-20 12:23:51.607340] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:08.528 [2024-11-20 12:23:51.608340] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:08.529 [2024-11-20 12:23:51.609345] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:08.529 [2024-11-20 12:23:51.610350] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:08.529 [2024-11-20 12:23:51.611357] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:08.529 [2024-11-20 12:23:51.612363] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:08.529 [2024-11-20 12:23:51.613369] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:08.529 [2024-11-20 12:23:51.614377] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:08.529 [2024-11-20 12:23:51.615386] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:08.529 [2024-11-20 12:23:51.615396] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2848504000 00:14:08.529 [2024-11-20 12:23:51.616340] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:08.529 [2024-11-20 12:23:51.627953] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:08.529 [2024-11-20 12:23:51.627978] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:08.529 [2024-11-20 12:23:51.633493] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:08.529 [2024-11-20 12:23:51.633529] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:08.529 [2024-11-20 12:23:51.633595] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:08.529 [2024-11-20 12:23:51.633609] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:08.529 [2024-11-20 12:23:51.633614] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:08.529 [2024-11-20 12:23:51.634497] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:08.529 [2024-11-20 12:23:51.634506] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:08.529 [2024-11-20 12:23:51.634513] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:08.529 [2024-11-20 12:23:51.635508] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:08.529 [2024-11-20 12:23:51.635516] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:08.529 [2024-11-20 12:23:51.635523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:08.529 [2024-11-20 12:23:51.636517] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:08.529 [2024-11-20 12:23:51.636524] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:08.529 [2024-11-20 12:23:51.637522] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:08.529 [2024-11-20 12:23:51.637529] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:08.529 [2024-11-20 12:23:51.637534] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:08.529 [2024-11-20 12:23:51.637539] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:08.529 [2024-11-20 12:23:51.637647] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:08.529 [2024-11-20 12:23:51.637652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:08.529 [2024-11-20 12:23:51.637656] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:08.529 [2024-11-20 12:23:51.638545] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:08.529 [2024-11-20 12:23:51.639535] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:08.529 [2024-11-20 12:23:51.640547] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:08.529 [2024-11-20 12:23:51.641544] 
vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:08.529 [2024-11-20 12:23:51.641613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:08.529 [2024-11-20 12:23:51.642564] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:08.529 [2024-11-20 12:23:51.642572] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:08.529 [2024-11-20 12:23:51.642576] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:08.529 [2024-11-20 12:23:51.642593] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:08.529 [2024-11-20 12:23:51.642603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:08.529 [2024-11-20 12:23:51.642618] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:08.529 [2024-11-20 12:23:51.642622] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:08.529 [2024-11-20 12:23:51.642626] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:08.529 [2024-11-20 12:23:51.642639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:08.529 [2024-11-20 12:23:51.642677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:14:08.529 [2024-11-20 12:23:51.642686] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:08.529 [2024-11-20 12:23:51.642690] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:08.529 [2024-11-20 12:23:51.642694] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:08.529 [2024-11-20 12:23:51.642698] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:08.529 [2024-11-20 12:23:51.642704] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:08.529 [2024-11-20 12:23:51.642709] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:08.529 [2024-11-20 12:23:51.642713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:08.529 [2024-11-20 12:23:51.642721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:08.529 [2024-11-20 12:23:51.642731] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:08.529 [2024-11-20 12:23:51.642741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:08.529 [2024-11-20 12:23:51.642750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.529 [2024-11-20 12:23:51.642757] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.529 [2024-11-20 12:23:51.642765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.529 [2024-11-20 12:23:51.642773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:08.529 [2024-11-20 12:23:51.642777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:08.529 [2024-11-20 12:23:51.642785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:08.529 [2024-11-20 12:23:51.642793] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:08.529 [2024-11-20 12:23:51.642801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:08.529 [2024-11-20 12:23:51.642808] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:08.529 [2024-11-20 12:23:51.642813] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:08.529 [2024-11-20 12:23:51.642818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:08.529 [2024-11-20 12:23:51.642824] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:14:08.529 [2024-11-20 12:23:51.642832] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:08.529 [2024-11-20 12:23:51.642842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:08.529 [2024-11-20 12:23:51.642892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:08.529 [2024-11-20 12:23:51.642899] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:08.529 [2024-11-20 12:23:51.642905] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:08.529 [2024-11-20 12:23:51.642909] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:08.529 [2024-11-20 12:23:51.642912] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:08.529 [2024-11-20 12:23:51.642918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:08.529 [2024-11-20 12:23:51.642933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:08.529 [2024-11-20 12:23:51.642940] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:08.529 [2024-11-20 12:23:51.642952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:08.530 [2024-11-20 12:23:51.642959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:14:08.530 [2024-11-20 12:23:51.642965] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:08.530 [2024-11-20 12:23:51.642969] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:08.530 [2024-11-20 12:23:51.642972] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:08.530 [2024-11-20 12:23:51.642977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:08.530 [2024-11-20 12:23:51.642994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:08.530 [2024-11-20 12:23:51.643005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:08.530 [2024-11-20 12:23:51.643014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:08.530 [2024-11-20 12:23:51.643020] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:08.530 [2024-11-20 12:23:51.643024] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:08.530 [2024-11-20 12:23:51.643027] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:08.530 [2024-11-20 12:23:51.643032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:08.530 [2024-11-20 12:23:51.643047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:14:08.530 [2024-11-20 12:23:51.643054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:08.530 [2024-11-20 12:23:51.643060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:08.530 [2024-11-20 12:23:51.643067] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:08.530 [2024-11-20 12:23:51.643072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:08.530 [2024-11-20 12:23:51.643076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:08.530 [2024-11-20 12:23:51.643081] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:08.530 [2024-11-20 12:23:51.643085] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:08.530 [2024-11-20 12:23:51.643089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:08.530 [2024-11-20 12:23:51.643094] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:08.530 [2024-11-20 12:23:51.643110] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:08.530 [2024-11-20 12:23:51.643123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:08.530 [2024-11-20 12:23:51.643133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:08.530 [2024-11-20 12:23:51.643141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:08.530 [2024-11-20 12:23:51.643151] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:08.530 [2024-11-20 12:23:51.643161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:08.530 [2024-11-20 12:23:51.643171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:08.530 [2024-11-20 12:23:51.643179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:08.530 [2024-11-20 12:23:51.643190] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:08.530 [2024-11-20 12:23:51.643195] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:08.530 [2024-11-20 12:23:51.643198] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:08.530 [2024-11-20 12:23:51.643202] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:08.530 [2024-11-20 12:23:51.643205] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:08.530 [2024-11-20 12:23:51.643211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:08.530 [2024-11-20 12:23:51.643218] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:08.530 [2024-11-20 12:23:51.643222] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:08.530 [2024-11-20 12:23:51.643225] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:08.530 [2024-11-20 12:23:51.643230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:08.530 [2024-11-20 12:23:51.643236] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:08.530 [2024-11-20 12:23:51.643240] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:08.530 [2024-11-20 12:23:51.643243] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:08.530 [2024-11-20 12:23:51.643248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:08.530 [2024-11-20 12:23:51.643255] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:08.530 [2024-11-20 12:23:51.643259] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:08.530 [2024-11-20 12:23:51.643262] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:08.530 [2024-11-20 12:23:51.643267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:08.530 [2024-11-20 12:23:51.643273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:08.790 [2024-11-20 
12:23:51.643284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:08.790 [2024-11-20 12:23:51.643295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:08.790 [2024-11-20 12:23:51.643301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:08.790 ===================================================== 00:14:08.790 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:08.790 ===================================================== 00:14:08.790 Controller Capabilities/Features 00:14:08.790 ================================ 00:14:08.790 Vendor ID: 4e58 00:14:08.790 Subsystem Vendor ID: 4e58 00:14:08.790 Serial Number: SPDK1 00:14:08.790 Model Number: SPDK bdev Controller 00:14:08.790 Firmware Version: 25.01 00:14:08.790 Recommended Arb Burst: 6 00:14:08.790 IEEE OUI Identifier: 8d 6b 50 00:14:08.790 Multi-path I/O 00:14:08.790 May have multiple subsystem ports: Yes 00:14:08.790 May have multiple controllers: Yes 00:14:08.790 Associated with SR-IOV VF: No 00:14:08.790 Max Data Transfer Size: 131072 00:14:08.790 Max Number of Namespaces: 32 00:14:08.790 Max Number of I/O Queues: 127 00:14:08.790 NVMe Specification Version (VS): 1.3 00:14:08.790 NVMe Specification Version (Identify): 1.3 00:14:08.790 Maximum Queue Entries: 256 00:14:08.790 Contiguous Queues Required: Yes 00:14:08.790 Arbitration Mechanisms Supported 00:14:08.790 Weighted Round Robin: Not Supported 00:14:08.790 Vendor Specific: Not Supported 00:14:08.790 Reset Timeout: 15000 ms 00:14:08.790 Doorbell Stride: 4 bytes 00:14:08.790 NVM Subsystem Reset: Not Supported 00:14:08.790 Command Sets Supported 00:14:08.790 NVM Command Set: Supported 00:14:08.790 Boot Partition: Not Supported 00:14:08.790 Memory Page Size Minimum: 4096 bytes 00:14:08.790 
Memory Page Size Maximum: 4096 bytes 00:14:08.790 Persistent Memory Region: Not Supported 00:14:08.790 Optional Asynchronous Events Supported 00:14:08.791 Namespace Attribute Notices: Supported 00:14:08.791 Firmware Activation Notices: Not Supported 00:14:08.791 ANA Change Notices: Not Supported 00:14:08.791 PLE Aggregate Log Change Notices: Not Supported 00:14:08.791 LBA Status Info Alert Notices: Not Supported 00:14:08.791 EGE Aggregate Log Change Notices: Not Supported 00:14:08.791 Normal NVM Subsystem Shutdown event: Not Supported 00:14:08.791 Zone Descriptor Change Notices: Not Supported 00:14:08.791 Discovery Log Change Notices: Not Supported 00:14:08.791 Controller Attributes 00:14:08.791 128-bit Host Identifier: Supported 00:14:08.791 Non-Operational Permissive Mode: Not Supported 00:14:08.791 NVM Sets: Not Supported 00:14:08.791 Read Recovery Levels: Not Supported 00:14:08.791 Endurance Groups: Not Supported 00:14:08.791 Predictable Latency Mode: Not Supported 00:14:08.791 Traffic Based Keep ALive: Not Supported 00:14:08.791 Namespace Granularity: Not Supported 00:14:08.791 SQ Associations: Not Supported 00:14:08.791 UUID List: Not Supported 00:14:08.791 Multi-Domain Subsystem: Not Supported 00:14:08.791 Fixed Capacity Management: Not Supported 00:14:08.791 Variable Capacity Management: Not Supported 00:14:08.791 Delete Endurance Group: Not Supported 00:14:08.791 Delete NVM Set: Not Supported 00:14:08.791 Extended LBA Formats Supported: Not Supported 00:14:08.791 Flexible Data Placement Supported: Not Supported 00:14:08.791 00:14:08.791 Controller Memory Buffer Support 00:14:08.791 ================================ 00:14:08.791 Supported: No 00:14:08.791 00:14:08.791 Persistent Memory Region Support 00:14:08.791 ================================ 00:14:08.791 Supported: No 00:14:08.791 00:14:08.791 Admin Command Set Attributes 00:14:08.791 ============================ 00:14:08.791 Security Send/Receive: Not Supported 00:14:08.791 Format NVM: Not Supported 
00:14:08.791 Firmware Activate/Download: Not Supported 00:14:08.791 Namespace Management: Not Supported 00:14:08.791 Device Self-Test: Not Supported 00:14:08.791 Directives: Not Supported 00:14:08.791 NVMe-MI: Not Supported 00:14:08.791 Virtualization Management: Not Supported 00:14:08.791 Doorbell Buffer Config: Not Supported 00:14:08.791 Get LBA Status Capability: Not Supported 00:14:08.791 Command & Feature Lockdown Capability: Not Supported 00:14:08.791 Abort Command Limit: 4 00:14:08.791 Async Event Request Limit: 4 00:14:08.791 Number of Firmware Slots: N/A 00:14:08.791 Firmware Slot 1 Read-Only: N/A 00:14:08.791 Firmware Activation Without Reset: N/A 00:14:08.791 Multiple Update Detection Support: N/A 00:14:08.791 Firmware Update Granularity: No Information Provided 00:14:08.791 Per-Namespace SMART Log: No 00:14:08.791 Asymmetric Namespace Access Log Page: Not Supported 00:14:08.791 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:08.791 Command Effects Log Page: Supported 00:14:08.791 Get Log Page Extended Data: Supported 00:14:08.791 Telemetry Log Pages: Not Supported 00:14:08.791 Persistent Event Log Pages: Not Supported 00:14:08.791 Supported Log Pages Log Page: May Support 00:14:08.791 Commands Supported & Effects Log Page: Not Supported 00:14:08.791 Feature Identifiers & Effects Log Page:May Support 00:14:08.791 NVMe-MI Commands & Effects Log Page: May Support 00:14:08.791 Data Area 4 for Telemetry Log: Not Supported 00:14:08.791 Error Log Page Entries Supported: 128 00:14:08.791 Keep Alive: Supported 00:14:08.791 Keep Alive Granularity: 10000 ms 00:14:08.791 00:14:08.791 NVM Command Set Attributes 00:14:08.791 ========================== 00:14:08.791 Submission Queue Entry Size 00:14:08.791 Max: 64 00:14:08.791 Min: 64 00:14:08.791 Completion Queue Entry Size 00:14:08.791 Max: 16 00:14:08.791 Min: 16 00:14:08.791 Number of Namespaces: 32 00:14:08.791 Compare Command: Supported 00:14:08.791 Write Uncorrectable Command: Not Supported 00:14:08.791 Dataset 
Management Command: Supported 00:14:08.791 Write Zeroes Command: Supported 00:14:08.791 Set Features Save Field: Not Supported 00:14:08.791 Reservations: Not Supported 00:14:08.791 Timestamp: Not Supported 00:14:08.791 Copy: Supported 00:14:08.791 Volatile Write Cache: Present 00:14:08.791 Atomic Write Unit (Normal): 1 00:14:08.791 Atomic Write Unit (PFail): 1 00:14:08.791 Atomic Compare & Write Unit: 1 00:14:08.791 Fused Compare & Write: Supported 00:14:08.791 Scatter-Gather List 00:14:08.791 SGL Command Set: Supported (Dword aligned) 00:14:08.791 SGL Keyed: Not Supported 00:14:08.791 SGL Bit Bucket Descriptor: Not Supported 00:14:08.791 SGL Metadata Pointer: Not Supported 00:14:08.791 Oversized SGL: Not Supported 00:14:08.791 SGL Metadata Address: Not Supported 00:14:08.791 SGL Offset: Not Supported 00:14:08.791 Transport SGL Data Block: Not Supported 00:14:08.791 Replay Protected Memory Block: Not Supported 00:14:08.791 00:14:08.791 Firmware Slot Information 00:14:08.791 ========================= 00:14:08.791 Active slot: 1 00:14:08.791 Slot 1 Firmware Revision: 25.01 00:14:08.791 00:14:08.791 00:14:08.791 Commands Supported and Effects 00:14:08.791 ============================== 00:14:08.791 Admin Commands 00:14:08.791 -------------- 00:14:08.791 Get Log Page (02h): Supported 00:14:08.791 Identify (06h): Supported 00:14:08.791 Abort (08h): Supported 00:14:08.791 Set Features (09h): Supported 00:14:08.791 Get Features (0Ah): Supported 00:14:08.791 Asynchronous Event Request (0Ch): Supported 00:14:08.791 Keep Alive (18h): Supported 00:14:08.791 I/O Commands 00:14:08.791 ------------ 00:14:08.791 Flush (00h): Supported LBA-Change 00:14:08.791 Write (01h): Supported LBA-Change 00:14:08.791 Read (02h): Supported 00:14:08.791 Compare (05h): Supported 00:14:08.791 Write Zeroes (08h): Supported LBA-Change 00:14:08.791 Dataset Management (09h): Supported LBA-Change 00:14:08.791 Copy (19h): Supported LBA-Change 00:14:08.791 00:14:08.791 Error Log 00:14:08.791 ========= 
00:14:08.791 00:14:08.791 Arbitration 00:14:08.791 =========== 00:14:08.791 Arbitration Burst: 1 00:14:08.791 00:14:08.791 Power Management 00:14:08.791 ================ 00:14:08.791 Number of Power States: 1 00:14:08.791 Current Power State: Power State #0 00:14:08.791 Power State #0: 00:14:08.791 Max Power: 0.00 W 00:14:08.791 Non-Operational State: Operational 00:14:08.791 Entry Latency: Not Reported 00:14:08.791 Exit Latency: Not Reported 00:14:08.791 Relative Read Throughput: 0 00:14:08.791 Relative Read Latency: 0 00:14:08.791 Relative Write Throughput: 0 00:14:08.791 Relative Write Latency: 0 00:14:08.791 Idle Power: Not Reported 00:14:08.791 Active Power: Not Reported 00:14:08.791 Non-Operational Permissive Mode: Not Supported 00:14:08.791 00:14:08.791 Health Information 00:14:08.791 ================== 00:14:08.791 Critical Warnings: 00:14:08.791 Available Spare Space: OK 00:14:08.791 Temperature: OK 00:14:08.791 Device Reliability: OK 00:14:08.791 Read Only: No 00:14:08.791 Volatile Memory Backup: OK 00:14:08.791 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:08.791 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:08.791 Available Spare: 0% 00:14:08.791 Available Sp[2024-11-20 12:23:51.643384] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:08.791 [2024-11-20 12:23:51.643394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:08.791 [2024-11-20 12:23:51.643417] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:08.791 [2024-11-20 12:23:51.643426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.791 [2024-11-20 12:23:51.643431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.791 [2024-11-20 12:23:51.643437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.791 [2024-11-20 12:23:51.643443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:08.791 [2024-11-20 12:23:51.643568] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:08.791 [2024-11-20 12:23:51.643577] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:08.791 [2024-11-20 12:23:51.644575] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:08.791 [2024-11-20 12:23:51.644623] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:08.791 [2024-11-20 12:23:51.644630] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:08.791 [2024-11-20 12:23:51.645579] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:08.791 [2024-11-20 12:23:51.645589] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:08.791 [2024-11-20 12:23:51.645638] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:08.792 [2024-11-20 12:23:51.647611] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:08.792 are Threshold: 0% 00:14:08.792 Life Percentage Used: 0% 00:14:08.792 Data Units Read: 0 00:14:08.792 Data 
Units Written: 0 00:14:08.792 Host Read Commands: 0 00:14:08.792 Host Write Commands: 0 00:14:08.792 Controller Busy Time: 0 minutes 00:14:08.792 Power Cycles: 0 00:14:08.792 Power On Hours: 0 hours 00:14:08.792 Unsafe Shutdowns: 0 00:14:08.792 Unrecoverable Media Errors: 0 00:14:08.792 Lifetime Error Log Entries: 0 00:14:08.792 Warning Temperature Time: 0 minutes 00:14:08.792 Critical Temperature Time: 0 minutes 00:14:08.792 00:14:08.792 Number of Queues 00:14:08.792 ================ 00:14:08.792 Number of I/O Submission Queues: 127 00:14:08.792 Number of I/O Completion Queues: 127 00:14:08.792 00:14:08.792 Active Namespaces 00:14:08.792 ================= 00:14:08.792 Namespace ID:1 00:14:08.792 Error Recovery Timeout: Unlimited 00:14:08.792 Command Set Identifier: NVM (00h) 00:14:08.792 Deallocate: Supported 00:14:08.792 Deallocated/Unwritten Error: Not Supported 00:14:08.792 Deallocated Read Value: Unknown 00:14:08.792 Deallocate in Write Zeroes: Not Supported 00:14:08.792 Deallocated Guard Field: 0xFFFF 00:14:08.792 Flush: Supported 00:14:08.792 Reservation: Supported 00:14:08.792 Namespace Sharing Capabilities: Multiple Controllers 00:14:08.792 Size (in LBAs): 131072 (0GiB) 00:14:08.792 Capacity (in LBAs): 131072 (0GiB) 00:14:08.792 Utilization (in LBAs): 131072 (0GiB) 00:14:08.792 NGUID: F343E7BCCB7749F0B9123A1AACB1956E 00:14:08.792 UUID: f343e7bc-cb77-49f0-b912-3a1aacb1956e 00:14:08.792 Thin Provisioning: Not Supported 00:14:08.792 Per-NS Atomic Units: Yes 00:14:08.792 Atomic Boundary Size (Normal): 0 00:14:08.792 Atomic Boundary Size (PFail): 0 00:14:08.792 Atomic Boundary Offset: 0 00:14:08.792 Maximum Single Source Range Length: 65535 00:14:08.792 Maximum Copy Length: 65535 00:14:08.792 Maximum Source Range Count: 1 00:14:08.792 NGUID/EUI64 Never Reused: No 00:14:08.792 Namespace Write Protected: No 00:14:08.792 Number of LBA Formats: 1 00:14:08.792 Current LBA Format: LBA Format #00 00:14:08.792 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:14:08.792 00:14:08.792 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:08.792 [2024-11-20 12:23:51.884812] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:14.067 Initializing NVMe Controllers 00:14:14.067 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:14.067 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:14.067 Initialization complete. Launching workers. 00:14:14.067 ======================================================== 00:14:14.067 Latency(us) 00:14:14.067 Device Information : IOPS MiB/s Average min max 00:14:14.067 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39935.16 156.00 3204.78 969.58 6829.77 00:14:14.067 ======================================================== 00:14:14.067 Total : 39935.16 156.00 3204.78 969.58 6829.77 00:14:14.067 00:14:14.067 [2024-11-20 12:23:56.901893] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:14.067 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:14.067 [2024-11-20 12:23:57.135934] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:19.340 Initializing NVMe Controllers 00:14:19.340 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:14:19.340 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:19.340 Initialization complete. Launching workers. 00:14:19.340 ======================================================== 00:14:19.340 Latency(us) 00:14:19.340 Device Information : IOPS MiB/s Average min max 00:14:19.340 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16048.87 62.69 7974.96 6292.17 8654.91 00:14:19.340 ======================================================== 00:14:19.340 Total : 16048.87 62.69 7974.96 6292.17 8654.91 00:14:19.340 00:14:19.340 [2024-11-20 12:24:02.167451] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:19.340 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:19.340 [2024-11-20 12:24:02.370392] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:24.615 [2024-11-20 12:24:07.441246] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:24.615 Initializing NVMe Controllers 00:14:24.615 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:24.615 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:24.615 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:24.615 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:24.615 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:24.615 Initialization complete. Launching workers. 
00:14:24.615 Starting thread on core 2 00:14:24.615 Starting thread on core 3 00:14:24.615 Starting thread on core 1 00:14:24.615 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:24.874 [2024-11-20 12:24:07.735746] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:28.177 [2024-11-20 12:24:10.797362] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:28.177 Initializing NVMe Controllers 00:14:28.177 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:28.177 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:28.177 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:28.177 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:28.177 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:28.177 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:28.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:28.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:28.177 Initialization complete. Launching workers. 
00:14:28.177 Starting thread on core 1 with urgent priority queue 00:14:28.177 Starting thread on core 2 with urgent priority queue 00:14:28.177 Starting thread on core 3 with urgent priority queue 00:14:28.177 Starting thread on core 0 with urgent priority queue 00:14:28.177 SPDK bdev Controller (SPDK1 ) core 0: 8416.00 IO/s 11.88 secs/100000 ios 00:14:28.177 SPDK bdev Controller (SPDK1 ) core 1: 7842.33 IO/s 12.75 secs/100000 ios 00:14:28.177 SPDK bdev Controller (SPDK1 ) core 2: 8970.67 IO/s 11.15 secs/100000 ios 00:14:28.177 SPDK bdev Controller (SPDK1 ) core 3: 7830.00 IO/s 12.77 secs/100000 ios 00:14:28.177 ======================================================== 00:14:28.177 00:14:28.177 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:28.177 [2024-11-20 12:24:11.082145] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:28.177 Initializing NVMe Controllers 00:14:28.177 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:28.177 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:28.177 Namespace ID: 1 size: 0GB 00:14:28.177 Initialization complete. 00:14:28.177 INFO: using host memory buffer for IO 00:14:28.177 Hello world! 
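The arbitration results above report both IO/s and secs/100000 ios per core; since the run targets 100000 I/Os per core (`-n 100000`), the two columns should be reciprocals scaled by that target. A quick sanity check of the figures from the log (illustrative only, not part of the test suite):

```python
# Sanity-check the arbitration table: secs/100000 ios == 100000 / (IO/s).
# Figures taken from the "SPDK bdev Controller (SPDK1)" rows in the log above.
results = {
    0: (8416.00, 11.88),
    1: (7842.33, 12.75),
    2: (8970.67, 11.15),
    3: (7830.00, 12.77),
}

for core, (iops, secs_per_100k) in results.items():
    derived = 100000.0 / iops
    # The log rounds to two decimals, so allow a small tolerance.
    assert abs(derived - secs_per_100k) < 0.01, (core, derived)
```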
00:14:28.177 [2024-11-20 12:24:11.118408] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:28.177 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:28.437 [2024-11-20 12:24:11.404362] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:29.374 Initializing NVMe Controllers 00:14:29.374 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:29.374 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:29.374 Initialization complete. Launching workers. 00:14:29.374 submit (in ns) avg, min, max = 7160.9, 3285.2, 4023260.0 00:14:29.374 complete (in ns) avg, min, max = 20585.3, 1813.9, 4200628.7 00:14:29.374 00:14:29.374 Submit histogram 00:14:29.374 ================ 00:14:29.374 Range in us Cumulative Count 00:14:29.374 3.283 - 3.297: 0.0061% ( 1) 00:14:29.374 3.297 - 3.311: 0.0671% ( 10) 00:14:29.374 3.311 - 3.325: 0.2012% ( 22) 00:14:29.374 3.325 - 3.339: 0.6157% ( 68) 00:14:29.374 3.339 - 3.353: 2.2799% ( 273) 00:14:29.374 3.353 - 3.367: 6.1570% ( 636) 00:14:29.374 3.367 - 3.381: 11.5460% ( 884) 00:14:29.374 3.381 - 3.395: 17.2092% ( 929) 00:14:29.374 3.395 - 3.409: 23.3906% ( 1014) 00:14:29.374 3.409 - 3.423: 29.4197% ( 989) 00:14:29.374 3.423 - 3.437: 34.8574% ( 892) 00:14:29.374 3.437 - 3.450: 40.4657% ( 920) 00:14:29.374 3.450 - 3.464: 45.7023% ( 859) 00:14:29.374 3.464 - 3.478: 50.2804% ( 751) 00:14:29.374 3.478 - 3.492: 54.4562% ( 685) 00:14:29.374 3.492 - 3.506: 60.2414% ( 949) 00:14:29.374 3.506 - 3.520: 66.8983% ( 1092) 00:14:29.374 3.520 - 3.534: 71.6106% ( 773) 00:14:29.374 3.534 - 3.548: 76.4326% ( 791) 00:14:29.374 3.548 - 3.562: 80.5109% ( 669) 00:14:29.374 3.562 - 3.590: 85.4913% ( 817) 
00:14:29.374 3.590 - 3.617: 87.3750% ( 309) 00:14:29.374 3.617 - 3.645: 88.0883% ( 117) 00:14:29.374 3.645 - 3.673: 89.3258% ( 203) 00:14:29.374 3.673 - 3.701: 91.1912% ( 306) 00:14:29.374 3.701 - 3.729: 92.9042% ( 281) 00:14:29.374 3.729 - 3.757: 94.5318% ( 267) 00:14:29.374 3.757 - 3.784: 96.2265% ( 278) 00:14:29.374 3.784 - 3.812: 97.6347% ( 231) 00:14:29.374 3.812 - 3.840: 98.5979% ( 158) 00:14:29.374 3.840 - 3.868: 99.0734% ( 78) 00:14:29.374 3.868 - 3.896: 99.4270% ( 58) 00:14:29.374 3.896 - 3.923: 99.5672% ( 23) 00:14:29.374 3.923 - 3.951: 99.6220% ( 9) 00:14:29.374 3.951 - 3.979: 99.6342% ( 2) 00:14:29.374 4.090 - 4.118: 99.6403% ( 1) 00:14:29.374 4.202 - 4.230: 99.6464% ( 1) 00:14:29.374 5.398 - 5.426: 99.6525% ( 1) 00:14:29.374 5.454 - 5.482: 99.6586% ( 1) 00:14:29.374 5.482 - 5.510: 99.6647% ( 1) 00:14:29.374 5.593 - 5.621: 99.6708% ( 1) 00:14:29.374 5.760 - 5.788: 99.6891% ( 3) 00:14:29.374 5.788 - 5.816: 99.7013% ( 2) 00:14:29.374 5.816 - 5.843: 99.7074% ( 1) 00:14:29.374 5.871 - 5.899: 99.7135% ( 1) 00:14:29.374 5.899 - 5.927: 99.7196% ( 1) 00:14:29.374 6.038 - 6.066: 99.7257% ( 1) 00:14:29.374 6.177 - 6.205: 99.7318% ( 1) 00:14:29.374 6.456 - 6.483: 99.7379% ( 1) 00:14:29.374 6.539 - 6.567: 99.7440% ( 1) 00:14:29.374 6.595 - 6.623: 99.7501% ( 1) 00:14:29.374 6.790 - 6.817: 99.7562% ( 1) 00:14:29.374 6.929 - 6.957: 99.7623% ( 1) 00:14:29.374 6.957 - 6.984: 99.7683% ( 1) 00:14:29.374 7.012 - 7.040: 99.7744% ( 1) 00:14:29.374 7.040 - 7.068: 99.7805% ( 1) 00:14:29.374 7.123 - 7.179: 99.7866% ( 1) 00:14:29.374 7.179 - 7.235: 99.7927% ( 1) 00:14:29.374 7.290 - 7.346: 99.7988% ( 1) 00:14:29.374 7.346 - 7.402: 99.8110% ( 2) 00:14:29.374 7.513 - 7.569: 99.8171% ( 1) 00:14:29.374 7.680 - 7.736: 99.8354% ( 3) 00:14:29.374 7.791 - 7.847: 99.8476% ( 2) 00:14:29.375 7.847 - 7.903: 99.8537% ( 1) 00:14:29.375 7.903 - 7.958: 99.8598% ( 1) 00:14:29.375 7.958 - 8.014: 99.8720% ( 2) 00:14:29.375 8.292 - 8.348: 99.8781% ( 1) 00:14:29.375 8.682 - 8.737: 99.8842% ( 1) 
00:14:29.375 9.071 - 9.127: 99.8903% ( 1) 00:14:29.375 9.405 - 9.461: 99.8964% ( 1) 00:14:29.375 9.850 - 9.906: 99.9025% ( 1) 00:14:29.375 13.913 - 13.969: 99.9086% ( 1) 00:14:29.375 3989.148 - 4017.642: 99.9939% ( 14) 00:14:29.375 4017.642 - 4046.136: 100.0000% ( 1) 00:14:29.375 00:14:29.375 Complete histogram 00:14:29.375 ================== 00:14:29.375 Range in us Cumulative Count 00:14:29.375 1.809 - 1.823: 0.0853% ( 14) 00:14:29.375 1.823 - 1.837: 1.0363% ( 156) 00:14:29.375 1.837 - 1.850: 2.5847% ( 254) 00:14:29.375 1.850 - [2024-11-20 12:24:12.428366] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:29.375 1.864: 4.4014% ( 298) 00:14:29.375 1.864 - 1.878: 35.3694% ( 5080) 00:14:29.375 1.878 - 1.892: 81.0046% ( 7486) 00:14:29.375 1.892 - 1.906: 91.4045% ( 1706) 00:14:29.375 1.906 - 1.920: 94.0015% ( 426) 00:14:29.375 1.920 - 1.934: 94.5074% ( 83) 00:14:29.375 1.934 - 1.948: 95.7693% ( 207) 00:14:29.375 1.948 - 1.962: 97.8542% ( 342) 00:14:29.375 1.962 - 1.976: 99.0490% ( 196) 00:14:29.375 1.976 - 1.990: 99.2197% ( 28) 00:14:29.375 1.990 - 2.003: 99.2563% ( 6) 00:14:29.375 2.003 - 2.017: 99.2624% ( 1) 00:14:29.375 2.017 - 2.031: 99.2868% ( 4) 00:14:29.375 2.031 - 2.045: 99.2929% ( 1) 00:14:29.375 2.073 - 2.087: 99.2990% ( 1) 00:14:29.375 2.101 - 2.115: 99.3111% ( 2) 00:14:29.375 2.129 - 2.143: 99.3233% ( 2) 00:14:29.375 2.157 - 2.170: 99.3294% ( 1) 00:14:29.375 2.240 - 2.254: 99.3355% ( 1) 00:14:29.375 2.282 - 2.296: 99.3416% ( 1) 00:14:29.375 2.365 - 2.379: 99.3477% ( 1) 00:14:29.375 3.923 - 3.951: 99.3538% ( 1) 00:14:29.375 4.035 - 4.063: 99.3599% ( 1) 00:14:29.375 4.341 - 4.369: 99.3660% ( 1) 00:14:29.375 4.758 - 4.786: 99.3721% ( 1) 00:14:29.375 4.786 - 4.814: 99.3782% ( 1) 00:14:29.375 4.981 - 5.009: 99.3843% ( 1) 00:14:29.375 5.064 - 5.092: 99.3904% ( 1) 00:14:29.375 5.092 - 5.120: 99.3965% ( 1) 00:14:29.375 5.120 - 5.148: 99.4087% ( 2) 00:14:29.375 5.176 - 5.203: 99.4148% ( 1) 00:14:29.375 
5.426 - 5.454: 99.4209% ( 1) 00:14:29.375 5.454 - 5.482: 99.4270% ( 1) 00:14:29.375 5.537 - 5.565: 99.4331% ( 1) 00:14:29.375 5.621 - 5.649: 99.4392% ( 1) 00:14:29.375 5.760 - 5.788: 99.4453% ( 1) 00:14:29.375 6.066 - 6.094: 99.4514% ( 1) 00:14:29.375 6.122 - 6.150: 99.4635% ( 2) 00:14:29.375 6.456 - 6.483: 99.4696% ( 1) 00:14:29.375 6.511 - 6.539: 99.4818% ( 2) 00:14:29.375 6.567 - 6.595: 99.4879% ( 1) 00:14:29.375 6.650 - 6.678: 99.4940% ( 1) 00:14:29.375 7.040 - 7.068: 99.5001% ( 1) 00:14:29.375 7.290 - 7.346: 99.5062% ( 1) 00:14:29.375 7.402 - 7.457: 99.5123% ( 1) 00:14:29.375 7.457 - 7.513: 99.5184% ( 1) 00:14:29.375 9.071 - 9.127: 99.5245% ( 1) 00:14:29.375 40.070 - 40.292: 99.5306% ( 1) 00:14:29.375 2806.650 - 2820.897: 99.5367% ( 1) 00:14:29.375 3989.148 - 4017.642: 99.9939% ( 75) 00:14:29.375 4188.605 - 4217.099: 100.0000% ( 1) 00:14:29.375 00:14:29.375 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:29.375 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:29.375 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:29.375 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:29.375 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:29.635 [ 00:14:29.635 { 00:14:29.635 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:29.635 "subtype": "Discovery", 00:14:29.635 "listen_addresses": [], 00:14:29.635 "allow_any_host": true, 00:14:29.635 "hosts": [] 00:14:29.635 }, 00:14:29.635 { 00:14:29.635 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:29.635 "subtype": "NVMe", 00:14:29.635 
"listen_addresses": [ 00:14:29.635 { 00:14:29.635 "trtype": "VFIOUSER", 00:14:29.635 "adrfam": "IPv4", 00:14:29.635 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:29.635 "trsvcid": "0" 00:14:29.635 } 00:14:29.635 ], 00:14:29.635 "allow_any_host": true, 00:14:29.635 "hosts": [], 00:14:29.635 "serial_number": "SPDK1", 00:14:29.635 "model_number": "SPDK bdev Controller", 00:14:29.635 "max_namespaces": 32, 00:14:29.635 "min_cntlid": 1, 00:14:29.635 "max_cntlid": 65519, 00:14:29.635 "namespaces": [ 00:14:29.635 { 00:14:29.635 "nsid": 1, 00:14:29.635 "bdev_name": "Malloc1", 00:14:29.635 "name": "Malloc1", 00:14:29.635 "nguid": "F343E7BCCB7749F0B9123A1AACB1956E", 00:14:29.635 "uuid": "f343e7bc-cb77-49f0-b912-3a1aacb1956e" 00:14:29.635 } 00:14:29.635 ] 00:14:29.635 }, 00:14:29.635 { 00:14:29.635 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:29.635 "subtype": "NVMe", 00:14:29.635 "listen_addresses": [ 00:14:29.635 { 00:14:29.635 "trtype": "VFIOUSER", 00:14:29.635 "adrfam": "IPv4", 00:14:29.635 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:29.635 "trsvcid": "0" 00:14:29.635 } 00:14:29.635 ], 00:14:29.635 "allow_any_host": true, 00:14:29.635 "hosts": [], 00:14:29.635 "serial_number": "SPDK2", 00:14:29.635 "model_number": "SPDK bdev Controller", 00:14:29.635 "max_namespaces": 32, 00:14:29.635 "min_cntlid": 1, 00:14:29.635 "max_cntlid": 65519, 00:14:29.635 "namespaces": [ 00:14:29.635 { 00:14:29.635 "nsid": 1, 00:14:29.635 "bdev_name": "Malloc2", 00:14:29.635 "name": "Malloc2", 00:14:29.635 "nguid": "6FCABE62894C4635959082F7F0A117CF", 00:14:29.635 "uuid": "6fcabe62-894c-4635-9590-82f7f0a117cf" 00:14:29.635 } 00:14:29.635 ] 00:14:29.635 } 00:14:29.635 ] 00:14:29.635 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:29.635 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' 
trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:29.635 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=407388 00:14:29.635 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:29.635 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:29.635 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:29.635 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:29.635 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:29.635 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:29.635 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:29.895 [2024-11-20 12:24:12.836386] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:29.895 Malloc3 00:14:29.895 12:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:30.154 [2024-11-20 12:24:13.079126] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:30.154 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:30.154 Asynchronous Event Request test 00:14:30.154 Attaching to 
/var/run/vfio-user/domain/vfio-user1/1 00:14:30.154 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:30.154 Registering asynchronous event callbacks... 00:14:30.154 Starting namespace attribute notice tests for all controllers... 00:14:30.154 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:30.154 aer_cb - Changed Namespace 00:14:30.154 Cleaning up... 00:14:30.414 [ 00:14:30.414 { 00:14:30.414 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:30.414 "subtype": "Discovery", 00:14:30.414 "listen_addresses": [], 00:14:30.414 "allow_any_host": true, 00:14:30.414 "hosts": [] 00:14:30.414 }, 00:14:30.414 { 00:14:30.414 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:30.414 "subtype": "NVMe", 00:14:30.414 "listen_addresses": [ 00:14:30.414 { 00:14:30.414 "trtype": "VFIOUSER", 00:14:30.414 "adrfam": "IPv4", 00:14:30.414 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:30.414 "trsvcid": "0" 00:14:30.414 } 00:14:30.414 ], 00:14:30.414 "allow_any_host": true, 00:14:30.414 "hosts": [], 00:14:30.414 "serial_number": "SPDK1", 00:14:30.414 "model_number": "SPDK bdev Controller", 00:14:30.414 "max_namespaces": 32, 00:14:30.414 "min_cntlid": 1, 00:14:30.414 "max_cntlid": 65519, 00:14:30.414 "namespaces": [ 00:14:30.414 { 00:14:30.414 "nsid": 1, 00:14:30.414 "bdev_name": "Malloc1", 00:14:30.414 "name": "Malloc1", 00:14:30.414 "nguid": "F343E7BCCB7749F0B9123A1AACB1956E", 00:14:30.414 "uuid": "f343e7bc-cb77-49f0-b912-3a1aacb1956e" 00:14:30.414 }, 00:14:30.414 { 00:14:30.414 "nsid": 2, 00:14:30.414 "bdev_name": "Malloc3", 00:14:30.414 "name": "Malloc3", 00:14:30.414 "nguid": "7D19FC2587434C9B9D40F9485921AF5A", 00:14:30.414 "uuid": "7d19fc25-8743-4c9b-9d40-f9485921af5a" 00:14:30.414 } 00:14:30.414 ] 00:14:30.414 }, 00:14:30.414 { 00:14:30.414 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:30.414 "subtype": "NVMe", 00:14:30.414 "listen_addresses": [ 00:14:30.414 { 00:14:30.414 "trtype": "VFIOUSER", 00:14:30.414 
"adrfam": "IPv4", 00:14:30.414 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:30.414 "trsvcid": "0" 00:14:30.414 } 00:14:30.414 ], 00:14:30.414 "allow_any_host": true, 00:14:30.414 "hosts": [], 00:14:30.414 "serial_number": "SPDK2", 00:14:30.414 "model_number": "SPDK bdev Controller", 00:14:30.414 "max_namespaces": 32, 00:14:30.414 "min_cntlid": 1, 00:14:30.414 "max_cntlid": 65519, 00:14:30.414 "namespaces": [ 00:14:30.414 { 00:14:30.414 "nsid": 1, 00:14:30.414 "bdev_name": "Malloc2", 00:14:30.414 "name": "Malloc2", 00:14:30.414 "nguid": "6FCABE62894C4635959082F7F0A117CF", 00:14:30.414 "uuid": "6fcabe62-894c-4635-9590-82f7f0a117cf" 00:14:30.414 } 00:14:30.414 ] 00:14:30.414 } 00:14:30.414 ] 00:14:30.414 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 407388 00:14:30.414 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:30.414 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:30.414 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:30.414 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:30.414 [2024-11-20 12:24:13.345400] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:14:30.414 [2024-11-20 12:24:13.345433] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407615 ] 00:14:30.414 [2024-11-20 12:24:13.387100] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:30.414 [2024-11-20 12:24:13.397189] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:30.414 [2024-11-20 12:24:13.397214] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f64fd86a000 00:14:30.414 [2024-11-20 12:24:13.398195] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:30.414 [2024-11-20 12:24:13.399198] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:30.414 [2024-11-20 12:24:13.400200] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:30.414 [2024-11-20 12:24:13.401204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:30.414 [2024-11-20 12:24:13.405952] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:30.414 [2024-11-20 12:24:13.406245] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:30.414 [2024-11-20 12:24:13.407260] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:30.414 
[2024-11-20 12:24:13.408269] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:30.414 [2024-11-20 12:24:13.409278] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:30.414 [2024-11-20 12:24:13.409290] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f64fd85f000 00:14:30.414 [2024-11-20 12:24:13.410233] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:30.414 [2024-11-20 12:24:13.419752] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:30.414 [2024-11-20 12:24:13.419778] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:30.414 [2024-11-20 12:24:13.424859] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:30.414 [2024-11-20 12:24:13.424900] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:30.414 [2024-11-20 12:24:13.424974] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:30.414 [2024-11-20 12:24:13.424986] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:30.414 [2024-11-20 12:24:13.424992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:30.414 [2024-11-20 12:24:13.425861] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:30.414 [2024-11-20 12:24:13.425870] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:30.414 [2024-11-20 12:24:13.425877] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:30.414 [2024-11-20 12:24:13.426866] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:30.414 [2024-11-20 12:24:13.426874] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:30.414 [2024-11-20 12:24:13.426884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:30.414 [2024-11-20 12:24:13.427871] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:30.414 [2024-11-20 12:24:13.427881] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:30.414 [2024-11-20 12:24:13.428885] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:30.414 [2024-11-20 12:24:13.428893] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:30.414 [2024-11-20 12:24:13.428897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:30.414 [2024-11-20 12:24:13.428904] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:30.414 [2024-11-20 12:24:13.429011] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:30.415 [2024-11-20 12:24:13.429016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:30.415 [2024-11-20 12:24:13.429021] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:30.415 [2024-11-20 12:24:13.429890] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:30.415 [2024-11-20 12:24:13.430892] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:30.415 [2024-11-20 12:24:13.431902] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:30.415 [2024-11-20 12:24:13.432906] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:30.415 [2024-11-20 12:24:13.432944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:30.415 [2024-11-20 12:24:13.433916] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:30.415 [2024-11-20 12:24:13.433924] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:30.415 [2024-11-20 12:24:13.433929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.433946] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:30.415 [2024-11-20 12:24:13.433957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.433969] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:30.415 [2024-11-20 12:24:13.433973] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:30.415 [2024-11-20 12:24:13.433977] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:30.415 [2024-11-20 12:24:13.433988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:30.415 [2024-11-20 12:24:13.441955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:30.415 [2024-11-20 12:24:13.441968] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:30.415 [2024-11-20 12:24:13.441973] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:30.415 [2024-11-20 12:24:13.441977] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:30.415 [2024-11-20 12:24:13.441981] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:30.415 [2024-11-20 12:24:13.441988] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:30.415 [2024-11-20 12:24:13.441992] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:30.415 [2024-11-20 12:24:13.441996] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.442005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.442014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:30.415 [2024-11-20 12:24:13.449954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:30.415 [2024-11-20 12:24:13.449965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.415 [2024-11-20 12:24:13.449973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.415 [2024-11-20 12:24:13.449981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.415 [2024-11-20 12:24:13.449988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.415 [2024-11-20 12:24:13.449992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.449998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.450007] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:30.415 [2024-11-20 12:24:13.457954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:30.415 [2024-11-20 12:24:13.457964] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:30.415 [2024-11-20 12:24:13.457969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.457975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.457980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.457988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:30.415 [2024-11-20 12:24:13.465953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:30.415 [2024-11-20 12:24:13.466008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.466019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:30.415 
[2024-11-20 12:24:13.466026] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:30.415 [2024-11-20 12:24:13.466030] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:30.415 [2024-11-20 12:24:13.466034] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:30.415 [2024-11-20 12:24:13.466040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:30.415 [2024-11-20 12:24:13.473951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:30.415 [2024-11-20 12:24:13.473961] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:30.415 [2024-11-20 12:24:13.473973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.473980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.473986] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:30.415 [2024-11-20 12:24:13.473990] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:30.415 [2024-11-20 12:24:13.473993] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:30.415 [2024-11-20 12:24:13.473999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:30.415 [2024-11-20 12:24:13.481953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:30.415 [2024-11-20 12:24:13.481967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.481974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.481981] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:30.415 [2024-11-20 12:24:13.481985] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:30.415 [2024-11-20 12:24:13.481988] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:30.415 [2024-11-20 12:24:13.481994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:30.415 [2024-11-20 12:24:13.489956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:30.415 [2024-11-20 12:24:13.489965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.489972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.489979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.489984] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.489989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.489995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.490000] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:30.415 [2024-11-20 12:24:13.490004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:30.415 [2024-11-20 12:24:13.490009] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:30.415 [2024-11-20 12:24:13.490023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:30.415 [2024-11-20 12:24:13.497952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:30.415 [2024-11-20 12:24:13.497965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:30.415 [2024-11-20 12:24:13.505951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:30.415 [2024-11-20 12:24:13.505963] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:30.415 [2024-11-20 12:24:13.513951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:30.415 [2024-11-20 
12:24:13.513964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:30.416 [2024-11-20 12:24:13.521952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:30.416 [2024-11-20 12:24:13.521969] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:30.416 [2024-11-20 12:24:13.521974] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:30.416 [2024-11-20 12:24:13.521977] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:30.416 [2024-11-20 12:24:13.521980] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:30.416 [2024-11-20 12:24:13.521983] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:30.416 [2024-11-20 12:24:13.521989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:30.416 [2024-11-20 12:24:13.521996] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:30.416 [2024-11-20 12:24:13.522000] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:30.416 [2024-11-20 12:24:13.522003] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:30.416 [2024-11-20 12:24:13.522008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:30.416 [2024-11-20 12:24:13.522015] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:30.416 [2024-11-20 12:24:13.522019] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:30.416 [2024-11-20 12:24:13.522022] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:30.416 [2024-11-20 12:24:13.522027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:30.416 [2024-11-20 12:24:13.522034] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:30.416 [2024-11-20 12:24:13.522040] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:30.416 [2024-11-20 12:24:13.522043] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:30.416 [2024-11-20 12:24:13.522048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:30.676 [2024-11-20 12:24:13.529954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:30.676 [2024-11-20 12:24:13.529968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:30.676 [2024-11-20 12:24:13.529977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:30.676 [2024-11-20 12:24:13.529983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:30.676 ===================================================== 00:14:30.676 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:30.676 ===================================================== 00:14:30.676 Controller Capabilities/Features 00:14:30.676 
================================ 00:14:30.676 Vendor ID: 4e58 00:14:30.676 Subsystem Vendor ID: 4e58 00:14:30.676 Serial Number: SPDK2 00:14:30.676 Model Number: SPDK bdev Controller 00:14:30.676 Firmware Version: 25.01 00:14:30.676 Recommended Arb Burst: 6 00:14:30.676 IEEE OUI Identifier: 8d 6b 50 00:14:30.676 Multi-path I/O 00:14:30.676 May have multiple subsystem ports: Yes 00:14:30.676 May have multiple controllers: Yes 00:14:30.676 Associated with SR-IOV VF: No 00:14:30.676 Max Data Transfer Size: 131072 00:14:30.676 Max Number of Namespaces: 32 00:14:30.676 Max Number of I/O Queues: 127 00:14:30.676 NVMe Specification Version (VS): 1.3 00:14:30.676 NVMe Specification Version (Identify): 1.3 00:14:30.676 Maximum Queue Entries: 256 00:14:30.676 Contiguous Queues Required: Yes 00:14:30.676 Arbitration Mechanisms Supported 00:14:30.676 Weighted Round Robin: Not Supported 00:14:30.676 Vendor Specific: Not Supported 00:14:30.676 Reset Timeout: 15000 ms 00:14:30.676 Doorbell Stride: 4 bytes 00:14:30.676 NVM Subsystem Reset: Not Supported 00:14:30.676 Command Sets Supported 00:14:30.676 NVM Command Set: Supported 00:14:30.676 Boot Partition: Not Supported 00:14:30.676 Memory Page Size Minimum: 4096 bytes 00:14:30.676 Memory Page Size Maximum: 4096 bytes 00:14:30.676 Persistent Memory Region: Not Supported 00:14:30.676 Optional Asynchronous Events Supported 00:14:30.676 Namespace Attribute Notices: Supported 00:14:30.676 Firmware Activation Notices: Not Supported 00:14:30.676 ANA Change Notices: Not Supported 00:14:30.676 PLE Aggregate Log Change Notices: Not Supported 00:14:30.676 LBA Status Info Alert Notices: Not Supported 00:14:30.676 EGE Aggregate Log Change Notices: Not Supported 00:14:30.676 Normal NVM Subsystem Shutdown event: Not Supported 00:14:30.676 Zone Descriptor Change Notices: Not Supported 00:14:30.676 Discovery Log Change Notices: Not Supported 00:14:30.676 Controller Attributes 00:14:30.676 128-bit Host Identifier: Supported 00:14:30.676 
Non-Operational Permissive Mode: Not Supported 00:14:30.676 NVM Sets: Not Supported 00:14:30.676 Read Recovery Levels: Not Supported 00:14:30.676 Endurance Groups: Not Supported 00:14:30.676 Predictable Latency Mode: Not Supported 00:14:30.676 Traffic Based Keep ALive: Not Supported 00:14:30.676 Namespace Granularity: Not Supported 00:14:30.676 SQ Associations: Not Supported 00:14:30.676 UUID List: Not Supported 00:14:30.676 Multi-Domain Subsystem: Not Supported 00:14:30.676 Fixed Capacity Management: Not Supported 00:14:30.676 Variable Capacity Management: Not Supported 00:14:30.676 Delete Endurance Group: Not Supported 00:14:30.676 Delete NVM Set: Not Supported 00:14:30.676 Extended LBA Formats Supported: Not Supported 00:14:30.676 Flexible Data Placement Supported: Not Supported 00:14:30.676 00:14:30.676 Controller Memory Buffer Support 00:14:30.676 ================================ 00:14:30.676 Supported: No 00:14:30.676 00:14:30.676 Persistent Memory Region Support 00:14:30.676 ================================ 00:14:30.676 Supported: No 00:14:30.676 00:14:30.676 Admin Command Set Attributes 00:14:30.676 ============================ 00:14:30.676 Security Send/Receive: Not Supported 00:14:30.676 Format NVM: Not Supported 00:14:30.676 Firmware Activate/Download: Not Supported 00:14:30.676 Namespace Management: Not Supported 00:14:30.676 Device Self-Test: Not Supported 00:14:30.676 Directives: Not Supported 00:14:30.676 NVMe-MI: Not Supported 00:14:30.676 Virtualization Management: Not Supported 00:14:30.676 Doorbell Buffer Config: Not Supported 00:14:30.676 Get LBA Status Capability: Not Supported 00:14:30.676 Command & Feature Lockdown Capability: Not Supported 00:14:30.676 Abort Command Limit: 4 00:14:30.676 Async Event Request Limit: 4 00:14:30.676 Number of Firmware Slots: N/A 00:14:30.676 Firmware Slot 1 Read-Only: N/A 00:14:30.676 Firmware Activation Without Reset: N/A 00:14:30.676 Multiple Update Detection Support: N/A 00:14:30.676 Firmware Update 
Granularity: No Information Provided 00:14:30.676 Per-Namespace SMART Log: No 00:14:30.676 Asymmetric Namespace Access Log Page: Not Supported 00:14:30.676 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:30.676 Command Effects Log Page: Supported 00:14:30.676 Get Log Page Extended Data: Supported 00:14:30.676 Telemetry Log Pages: Not Supported 00:14:30.676 Persistent Event Log Pages: Not Supported 00:14:30.676 Supported Log Pages Log Page: May Support 00:14:30.676 Commands Supported & Effects Log Page: Not Supported 00:14:30.676 Feature Identifiers & Effects Log Page:May Support 00:14:30.676 NVMe-MI Commands & Effects Log Page: May Support 00:14:30.676 Data Area 4 for Telemetry Log: Not Supported 00:14:30.676 Error Log Page Entries Supported: 128 00:14:30.676 Keep Alive: Supported 00:14:30.676 Keep Alive Granularity: 10000 ms 00:14:30.676 00:14:30.676 NVM Command Set Attributes 00:14:30.676 ========================== 00:14:30.676 Submission Queue Entry Size 00:14:30.676 Max: 64 00:14:30.677 Min: 64 00:14:30.677 Completion Queue Entry Size 00:14:30.677 Max: 16 00:14:30.677 Min: 16 00:14:30.677 Number of Namespaces: 32 00:14:30.677 Compare Command: Supported 00:14:30.677 Write Uncorrectable Command: Not Supported 00:14:30.677 Dataset Management Command: Supported 00:14:30.677 Write Zeroes Command: Supported 00:14:30.677 Set Features Save Field: Not Supported 00:14:30.677 Reservations: Not Supported 00:14:30.677 Timestamp: Not Supported 00:14:30.677 Copy: Supported 00:14:30.677 Volatile Write Cache: Present 00:14:30.677 Atomic Write Unit (Normal): 1 00:14:30.677 Atomic Write Unit (PFail): 1 00:14:30.677 Atomic Compare & Write Unit: 1 00:14:30.677 Fused Compare & Write: Supported 00:14:30.677 Scatter-Gather List 00:14:30.677 SGL Command Set: Supported (Dword aligned) 00:14:30.677 SGL Keyed: Not Supported 00:14:30.677 SGL Bit Bucket Descriptor: Not Supported 00:14:30.677 SGL Metadata Pointer: Not Supported 00:14:30.677 Oversized SGL: Not Supported 00:14:30.677 SGL 
Metadata Address: Not Supported 00:14:30.677 SGL Offset: Not Supported 00:14:30.677 Transport SGL Data Block: Not Supported 00:14:30.677 Replay Protected Memory Block: Not Supported 00:14:30.677 00:14:30.677 Firmware Slot Information 00:14:30.677 ========================= 00:14:30.677 Active slot: 1 00:14:30.677 Slot 1 Firmware Revision: 25.01 00:14:30.677 00:14:30.677 00:14:30.677 Commands Supported and Effects 00:14:30.677 ============================== 00:14:30.677 Admin Commands 00:14:30.677 -------------- 00:14:30.677 Get Log Page (02h): Supported 00:14:30.677 Identify (06h): Supported 00:14:30.677 Abort (08h): Supported 00:14:30.677 Set Features (09h): Supported 00:14:30.677 Get Features (0Ah): Supported 00:14:30.677 Asynchronous Event Request (0Ch): Supported 00:14:30.677 Keep Alive (18h): Supported 00:14:30.677 I/O Commands 00:14:30.677 ------------ 00:14:30.677 Flush (00h): Supported LBA-Change 00:14:30.677 Write (01h): Supported LBA-Change 00:14:30.677 Read (02h): Supported 00:14:30.677 Compare (05h): Supported 00:14:30.677 Write Zeroes (08h): Supported LBA-Change 00:14:30.677 Dataset Management (09h): Supported LBA-Change 00:14:30.677 Copy (19h): Supported LBA-Change 00:14:30.677 00:14:30.677 Error Log 00:14:30.677 ========= 00:14:30.677 00:14:30.677 Arbitration 00:14:30.677 =========== 00:14:30.677 Arbitration Burst: 1 00:14:30.677 00:14:30.677 Power Management 00:14:30.677 ================ 00:14:30.677 Number of Power States: 1 00:14:30.677 Current Power State: Power State #0 00:14:30.677 Power State #0: 00:14:30.677 Max Power: 0.00 W 00:14:30.677 Non-Operational State: Operational 00:14:30.677 Entry Latency: Not Reported 00:14:30.677 Exit Latency: Not Reported 00:14:30.677 Relative Read Throughput: 0 00:14:30.677 Relative Read Latency: 0 00:14:30.677 Relative Write Throughput: 0 00:14:30.677 Relative Write Latency: 0 00:14:30.677 Idle Power: Not Reported 00:14:30.677 Active Power: Not Reported 00:14:30.677 Non-Operational Permissive Mode: Not 
Supported 00:14:30.677 00:14:30.677 Health Information 00:14:30.677 ================== 00:14:30.677 Critical Warnings: 00:14:30.677 Available Spare Space: OK 00:14:30.677 Temperature: OK 00:14:30.677 Device Reliability: OK 00:14:30.677 Read Only: No 00:14:30.677 Volatile Memory Backup: OK 00:14:30.677 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:30.677 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:30.677 Available Spare: 0% 00:14:30.677 Available Sp[2024-11-20 12:24:13.530073] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:30.677 [2024-11-20 12:24:13.537952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:30.677 [2024-11-20 12:24:13.537978] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:30.677 [2024-11-20 12:24:13.537986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.677 [2024-11-20 12:24:13.537992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.677 [2024-11-20 12:24:13.537998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.677 [2024-11-20 12:24:13.538003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.677 [2024-11-20 12:24:13.538057] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:30.677 [2024-11-20 12:24:13.538067] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:30.677 
[2024-11-20 12:24:13.539061] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:30.677 [2024-11-20 12:24:13.539106] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:30.677 [2024-11-20 12:24:13.539112] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:30.677 [2024-11-20 12:24:13.540064] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:30.677 [2024-11-20 12:24:13.540075] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:30.677 [2024-11-20 12:24:13.540121] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:30.677 [2024-11-20 12:24:13.541098] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:30.677 are Threshold: 0% 00:14:30.677 Life Percentage Used: 0% 00:14:30.677 Data Units Read: 0 00:14:30.677 Data Units Written: 0 00:14:30.677 Host Read Commands: 0 00:14:30.677 Host Write Commands: 0 00:14:30.677 Controller Busy Time: 0 minutes 00:14:30.677 Power Cycles: 0 00:14:30.677 Power On Hours: 0 hours 00:14:30.677 Unsafe Shutdowns: 0 00:14:30.677 Unrecoverable Media Errors: 0 00:14:30.677 Lifetime Error Log Entries: 0 00:14:30.677 Warning Temperature Time: 0 minutes 00:14:30.677 Critical Temperature Time: 0 minutes 00:14:30.677 00:14:30.677 Number of Queues 00:14:30.677 ================ 00:14:30.677 Number of I/O Submission Queues: 127 00:14:30.677 Number of I/O Completion Queues: 127 00:14:30.677 00:14:30.677 Active Namespaces 00:14:30.677 ================= 00:14:30.677 Namespace ID:1 00:14:30.677 Error Recovery Timeout: Unlimited 
00:14:30.677 Command Set Identifier: NVM (00h) 00:14:30.677 Deallocate: Supported 00:14:30.677 Deallocated/Unwritten Error: Not Supported 00:14:30.677 Deallocated Read Value: Unknown 00:14:30.678 Deallocate in Write Zeroes: Not Supported 00:14:30.678 Deallocated Guard Field: 0xFFFF 00:14:30.678 Flush: Supported 00:14:30.678 Reservation: Supported 00:14:30.678 Namespace Sharing Capabilities: Multiple Controllers 00:14:30.678 Size (in LBAs): 131072 (0GiB) 00:14:30.678 Capacity (in LBAs): 131072 (0GiB) 00:14:30.678 Utilization (in LBAs): 131072 (0GiB) 00:14:30.678 NGUID: 6FCABE62894C4635959082F7F0A117CF 00:14:30.678 UUID: 6fcabe62-894c-4635-9590-82f7f0a117cf 00:14:30.678 Thin Provisioning: Not Supported 00:14:30.678 Per-NS Atomic Units: Yes 00:14:30.678 Atomic Boundary Size (Normal): 0 00:14:30.678 Atomic Boundary Size (PFail): 0 00:14:30.678 Atomic Boundary Offset: 0 00:14:30.678 Maximum Single Source Range Length: 65535 00:14:30.678 Maximum Copy Length: 65535 00:14:30.678 Maximum Source Range Count: 1 00:14:30.678 NGUID/EUI64 Never Reused: No 00:14:30.678 Namespace Write Protected: No 00:14:30.678 Number of LBA Formats: 1 00:14:30.678 Current LBA Format: LBA Format #00 00:14:30.678 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:30.678 00:14:30.678 12:24:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:30.678 [2024-11-20 12:24:13.768327] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:35.949 Initializing NVMe Controllers 00:14:35.949 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:35.949 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:14:35.949 Initialization complete. Launching workers. 00:14:35.949 ======================================================== 00:14:35.949 Latency(us) 00:14:35.949 Device Information : IOPS MiB/s Average min max 00:14:35.949 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39950.53 156.06 3203.78 960.80 8603.11 00:14:35.949 ======================================================== 00:14:35.949 Total : 39950.53 156.06 3203.78 960.80 8603.11 00:14:35.949 00:14:35.949 [2024-11-20 12:24:18.875213] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:35.949 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:36.207 [2024-11-20 12:24:19.113910] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:41.642 Initializing NVMe Controllers 00:14:41.642 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:41.642 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:41.642 Initialization complete. Launching workers. 
00:14:41.642 ======================================================== 00:14:41.642 Latency(us) 00:14:41.642 Device Information : IOPS MiB/s Average min max 00:14:41.642 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39917.60 155.93 3206.80 965.11 7459.87 00:14:41.642 ======================================================== 00:14:41.642 Total : 39917.60 155.93 3206.80 965.11 7459.87 00:14:41.642 00:14:41.642 [2024-11-20 12:24:24.139003] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:41.642 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:41.642 [2024-11-20 12:24:24.343470] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:46.916 [2024-11-20 12:24:29.487039] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:46.916 Initializing NVMe Controllers 00:14:46.916 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:46.916 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:46.916 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:46.916 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:46.916 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:46.916 Initialization complete. Launching workers. 
00:14:46.916 Starting thread on core 2 00:14:46.916 Starting thread on core 3 00:14:46.916 Starting thread on core 1 00:14:46.916 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:46.916 [2024-11-20 12:24:29.784966] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:50.205 [2024-11-20 12:24:32.872170] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:50.205 Initializing NVMe Controllers 00:14:50.205 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:50.205 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:50.205 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:50.205 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:50.205 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:50.205 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:50.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:50.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:50.205 Initialization complete. Launching workers. 
00:14:50.205 Starting thread on core 1 with urgent priority queue 00:14:50.205 Starting thread on core 2 with urgent priority queue 00:14:50.205 Starting thread on core 3 with urgent priority queue 00:14:50.205 Starting thread on core 0 with urgent priority queue 00:14:50.205 SPDK bdev Controller (SPDK2 ) core 0: 8233.67 IO/s 12.15 secs/100000 ios 00:14:50.205 SPDK bdev Controller (SPDK2 ) core 1: 7986.33 IO/s 12.52 secs/100000 ios 00:14:50.205 SPDK bdev Controller (SPDK2 ) core 2: 7485.33 IO/s 13.36 secs/100000 ios 00:14:50.205 SPDK bdev Controller (SPDK2 ) core 3: 10018.33 IO/s 9.98 secs/100000 ios 00:14:50.205 ======================================================== 00:14:50.205 00:14:50.205 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:50.205 [2024-11-20 12:24:33.157791] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:50.205 Initializing NVMe Controllers 00:14:50.205 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:50.205 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:50.205 Namespace ID: 1 size: 0GB 00:14:50.205 Initialization complete. 00:14:50.205 INFO: using host memory buffer for IO 00:14:50.205 Hello world! 
00:14:50.205 [2024-11-20 12:24:33.167858] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:50.205 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:50.464 [2024-11-20 12:24:33.441020] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:51.843 Initializing NVMe Controllers 00:14:51.843 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:51.843 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:51.843 Initialization complete. Launching workers. 00:14:51.843 submit (in ns) avg, min, max = 6223.4, 3256.5, 4000255.7 00:14:51.843 complete (in ns) avg, min, max = 22690.3, 1793.0, 4000247.8 00:14:51.843 00:14:51.843 Submit histogram 00:14:51.843 ================ 00:14:51.843 Range in us Cumulative Count 00:14:51.843 3.256 - 3.270: 0.0186% ( 3) 00:14:51.843 3.283 - 3.297: 0.0682% ( 8) 00:14:51.843 3.297 - 3.311: 0.2171% ( 24) 00:14:51.843 3.311 - 3.325: 0.5953% ( 61) 00:14:51.843 3.325 - 3.339: 1.7798% ( 191) 00:14:51.843 3.339 - 3.353: 5.4078% ( 585) 00:14:51.843 3.353 - 3.367: 10.5054% ( 822) 00:14:51.843 3.367 - 3.381: 16.5147% ( 969) 00:14:51.843 3.381 - 3.395: 22.9953% ( 1045) 00:14:51.843 3.395 - 3.409: 29.0853% ( 982) 00:14:51.843 3.409 - 3.423: 34.4124% ( 859) 00:14:51.843 3.423 - 3.437: 39.6775% ( 849) 00:14:51.843 3.437 - 3.450: 45.1039% ( 875) 00:14:51.843 3.450 - 3.464: 49.3271% ( 681) 00:14:51.843 3.464 - 3.478: 53.2341% ( 630) 00:14:51.843 3.478 - 3.492: 58.1643% ( 795) 00:14:51.843 3.492 - 3.506: 65.0171% ( 1105) 00:14:51.843 3.506 - 3.520: 69.8357% ( 777) 00:14:51.843 3.520 - 3.534: 74.2202% ( 707) 00:14:51.843 3.534 - 3.548: 79.0574% ( 780) 00:14:51.843 3.548 - 3.562: 82.5240% ( 559) 
00:14:51.843 3.562 - 3.590: 86.1023% ( 577) 00:14:51.843 3.590 - 3.617: 86.9333% ( 134) 00:14:51.843 3.617 - 3.645: 87.9690% ( 167) 00:14:51.843 3.645 - 3.673: 89.4822% ( 244) 00:14:51.843 3.673 - 3.701: 91.4667% ( 320) 00:14:51.843 3.701 - 3.729: 93.3023% ( 296) 00:14:51.843 3.729 - 3.757: 95.1504% ( 298) 00:14:51.843 3.757 - 3.784: 96.6636% ( 244) 00:14:51.843 3.784 - 3.812: 97.9721% ( 211) 00:14:51.843 3.812 - 3.840: 98.6605% ( 111) 00:14:51.843 3.840 - 3.868: 99.0698% ( 66) 00:14:51.843 3.868 - 3.896: 99.3736% ( 49) 00:14:51.843 3.896 - 3.923: 99.5287% ( 25) 00:14:51.843 3.923 - 3.951: 99.6031% ( 12) 00:14:51.843 3.951 - 3.979: 99.6403% ( 6) 00:14:51.843 3.979 - 4.007: 99.6465% ( 1) 00:14:51.843 4.146 - 4.174: 99.6527% ( 1) 00:14:51.843 4.230 - 4.257: 99.6589% ( 1) 00:14:51.843 5.454 - 5.482: 99.6651% ( 1) 00:14:51.843 5.482 - 5.510: 99.6713% ( 1) 00:14:51.843 5.510 - 5.537: 99.6775% ( 1) 00:14:51.843 5.537 - 5.565: 99.6837% ( 1) 00:14:51.843 5.593 - 5.621: 99.6899% ( 1) 00:14:51.843 5.704 - 5.732: 99.6961% ( 1) 00:14:51.843 6.177 - 6.205: 99.7023% ( 1) 00:14:51.843 6.317 - 6.344: 99.7085% ( 1) 00:14:51.843 6.344 - 6.372: 99.7147% ( 1) 00:14:51.843 6.372 - 6.400: 99.7209% ( 1) 00:14:51.843 6.400 - 6.428: 99.7271% ( 1) 00:14:51.843 6.483 - 6.511: 99.7333% ( 1) 00:14:51.843 6.539 - 6.567: 99.7395% ( 1) 00:14:51.843 6.595 - 6.623: 99.7457% ( 1) 00:14:51.843 6.678 - 6.706: 99.7519% ( 1) 00:14:51.843 6.734 - 6.762: 99.7581% ( 1) 00:14:51.843 6.762 - 6.790: 99.7705% ( 2) 00:14:51.843 6.790 - 6.817: 99.7767% ( 1) 00:14:51.843 6.817 - 6.845: 99.7829% ( 1) 00:14:51.843 6.901 - 6.929: 99.7891% ( 1) 00:14:51.843 6.957 - 6.984: 99.7953% ( 1) 00:14:51.843 7.123 - 7.179: 99.8016% ( 1) 00:14:51.843 7.235 - 7.290: 99.8078% ( 1) 00:14:51.843 7.290 - 7.346: 99.8140% ( 1) 00:14:51.843 7.791 - 7.847: 99.8202% ( 1) 00:14:51.843 7.847 - 7.903: 99.8326% ( 2) 00:14:51.843 8.014 - 8.070: 99.8388% ( 1) 00:14:51.843 8.070 - 8.125: 99.8636% ( 4) 00:14:51.843 8.125 - 8.181: 99.8698% ( 1) 
00:14:51.843 8.403 - 8.459: 99.8760% ( 1) 00:14:51.843 8.570 - 8.626: 99.8822% ( 1) 00:14:51.843 8.626 - 8.682: 99.8884% ( 1) 00:14:51.843 9.071 - 9.127: 99.8946% ( 1) 00:14:51.843 9.238 - 9.294: 99.9008% ( 1) 00:14:51.843 9.350 - 9.405: 99.9070% ( 1) 00:14:51.843 9.628 - 9.683: 99.9132% ( 1) 00:14:51.843 10.129 - 10.184: 99.9194% ( 1) 00:14:51.843 12.466 - 12.522: 99.9256% ( 1) 00:14:51.843 14.080 - 14.136: 99.9318% ( 1) 00:14:51.844 3989.148 - 4017.642: 100.0000% ( 11) 00:14:51.844 00:14:51.844 [2024-11-20 12:24:34.537008] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:51.844 Complete histogram 00:14:51.844 ================== 00:14:51.844 Range in us Cumulative Count 00:14:51.844 1.781 - 1.795: 0.0062% ( 1) 00:14:51.844 1.795 - 1.809: 0.0496% ( 7) 00:14:51.844 1.809 - 1.823: 0.2481% ( 32) 00:14:51.844 1.823 - 1.837: 0.9426% ( 112) 00:14:51.844 1.837 - 1.850: 2.4682% ( 246) 00:14:51.844 1.850 - 1.864: 19.9132% ( 2813) 00:14:51.844 1.864 - 1.878: 75.5845% ( 8977) 00:14:51.844 1.878 - 1.892: 89.4450% ( 2235) 00:14:51.844 1.892 - 1.906: 95.0822% ( 909) 00:14:51.844 1.906 - 1.920: 96.5333% ( 234) 00:14:51.844 1.920 - 1.934: 97.4202% ( 143) 00:14:51.844 1.934 - 1.948: 98.3814% ( 155) 00:14:51.844 1.948 - 1.962: 98.9829% ( 97) 00:14:51.844 1.962 - 1.976: 99.1442% ( 26) 00:14:51.844 1.976 - 1.990: 99.2062% ( 10) 00:14:51.844 1.990 - 2.003: 99.2124% ( 1) 00:14:51.844 2.003 - 2.017: 99.2248% ( 2) 00:14:51.844 2.017 - 2.031: 99.2310% ( 1) 00:14:51.844 2.031 - 2.045: 99.2372% ( 1) 00:14:51.844 2.045 - 2.059: 99.2682% ( 5) 00:14:51.844 2.059 - 2.073: 99.2992% ( 5) 00:14:51.844 2.073 - 2.087: 99.3054% ( 1) 00:14:51.844 2.087 - 2.101: 99.3178% ( 2) 00:14:51.844 2.101 - 2.115: 99.3240% ( 1) 00:14:51.844 2.157 - 2.170: 99.3302% ( 1) 00:14:51.844 2.184 - 2.198: 99.3364% ( 1) 00:14:51.844 2.268 - 2.282: 99.3426% ( 1) 00:14:51.844 2.323 - 2.337: 99.3488% ( 1) 00:14:51.844 2.379 - 2.393: 99.3550% ( 1) 00:14:51.844 3.311 - 
3.325: 99.3612% ( 1) 00:14:51.844 4.035 - 4.063: 99.3674% ( 1) 00:14:51.844 4.647 - 4.675: 99.3736% ( 1) 00:14:51.844 4.814 - 4.842: 99.3860% ( 2) 00:14:51.844 4.897 - 4.925: 99.3922% ( 1) 00:14:51.844 5.315 - 5.343: 99.3984% ( 1) 00:14:51.844 5.510 - 5.537: 99.4047% ( 1) 00:14:51.844 5.677 - 5.704: 99.4109% ( 1) 00:14:51.844 6.400 - 6.428: 99.4171% ( 1) 00:14:51.844 6.456 - 6.483: 99.4233% ( 1) 00:14:51.844 6.511 - 6.539: 99.4295% ( 1) 00:14:51.844 6.734 - 6.762: 99.4357% ( 1) 00:14:51.844 6.762 - 6.790: 99.4419% ( 1) 00:14:51.844 6.790 - 6.817: 99.4481% ( 1) 00:14:51.844 6.817 - 6.845: 99.4543% ( 1) 00:14:51.844 6.984 - 7.012: 99.4605% ( 1) 00:14:51.844 7.012 - 7.040: 99.4667% ( 1) 00:14:51.844 7.235 - 7.290: 99.4729% ( 1) 00:14:51.844 8.181 - 8.237: 99.4791% ( 1) 00:14:51.844 3989.148 - 4017.642: 100.0000% ( 84) 00:14:51.844 00:14:51.844 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:51.844 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:51.844 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:51.844 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:51.844 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:51.844 [ 00:14:51.844 { 00:14:51.844 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:51.844 "subtype": "Discovery", 00:14:51.844 "listen_addresses": [], 00:14:51.844 "allow_any_host": true, 00:14:51.844 "hosts": [] 00:14:51.844 }, 00:14:51.844 { 00:14:51.844 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:51.844 "subtype": "NVMe", 00:14:51.844 "listen_addresses": [ 
00:14:51.844 { 00:14:51.844 "trtype": "VFIOUSER", 00:14:51.844 "adrfam": "IPv4", 00:14:51.844 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:51.844 "trsvcid": "0" 00:14:51.844 } 00:14:51.844 ], 00:14:51.844 "allow_any_host": true, 00:14:51.844 "hosts": [], 00:14:51.844 "serial_number": "SPDK1", 00:14:51.844 "model_number": "SPDK bdev Controller", 00:14:51.844 "max_namespaces": 32, 00:14:51.844 "min_cntlid": 1, 00:14:51.844 "max_cntlid": 65519, 00:14:51.844 "namespaces": [ 00:14:51.844 { 00:14:51.844 "nsid": 1, 00:14:51.844 "bdev_name": "Malloc1", 00:14:51.844 "name": "Malloc1", 00:14:51.844 "nguid": "F343E7BCCB7749F0B9123A1AACB1956E", 00:14:51.844 "uuid": "f343e7bc-cb77-49f0-b912-3a1aacb1956e" 00:14:51.844 }, 00:14:51.844 { 00:14:51.844 "nsid": 2, 00:14:51.844 "bdev_name": "Malloc3", 00:14:51.844 "name": "Malloc3", 00:14:51.844 "nguid": "7D19FC2587434C9B9D40F9485921AF5A", 00:14:51.844 "uuid": "7d19fc25-8743-4c9b-9d40-f9485921af5a" 00:14:51.844 } 00:14:51.844 ] 00:14:51.844 }, 00:14:51.844 { 00:14:51.844 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:51.844 "subtype": "NVMe", 00:14:51.844 "listen_addresses": [ 00:14:51.844 { 00:14:51.844 "trtype": "VFIOUSER", 00:14:51.844 "adrfam": "IPv4", 00:14:51.844 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:51.844 "trsvcid": "0" 00:14:51.844 } 00:14:51.844 ], 00:14:51.844 "allow_any_host": true, 00:14:51.844 "hosts": [], 00:14:51.844 "serial_number": "SPDK2", 00:14:51.844 "model_number": "SPDK bdev Controller", 00:14:51.844 "max_namespaces": 32, 00:14:51.844 "min_cntlid": 1, 00:14:51.844 "max_cntlid": 65519, 00:14:51.844 "namespaces": [ 00:14:51.844 { 00:14:51.844 "nsid": 1, 00:14:51.844 "bdev_name": "Malloc2", 00:14:51.844 "name": "Malloc2", 00:14:51.844 "nguid": "6FCABE62894C4635959082F7F0A117CF", 00:14:51.844 "uuid": "6fcabe62-894c-4635-9590-82f7f0a117cf" 00:14:51.844 } 00:14:51.844 ] 00:14:51.844 } 00:14:51.844 ] 00:14:51.844 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:51.844 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:51.844 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=411077 00:14:51.844 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:51.844 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:51.844 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:51.844 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:51.844 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:51.844 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:51.844 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:51.844 [2024-11-20 12:24:34.936393] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:52.103 Malloc4 00:14:52.103 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:52.103 [2024-11-20 12:24:35.170171] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:52.103 
12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:52.103 Asynchronous Event Request test 00:14:52.103 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:52.103 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:52.103 Registering asynchronous event callbacks... 00:14:52.103 Starting namespace attribute notice tests for all controllers... 00:14:52.103 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:52.103 aer_cb - Changed Namespace 00:14:52.103 Cleaning up... 00:14:52.363 [ 00:14:52.363 { 00:14:52.363 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:52.363 "subtype": "Discovery", 00:14:52.363 "listen_addresses": [], 00:14:52.363 "allow_any_host": true, 00:14:52.363 "hosts": [] 00:14:52.363 }, 00:14:52.363 { 00:14:52.363 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:52.363 "subtype": "NVMe", 00:14:52.363 "listen_addresses": [ 00:14:52.363 { 00:14:52.363 "trtype": "VFIOUSER", 00:14:52.363 "adrfam": "IPv4", 00:14:52.363 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:52.363 "trsvcid": "0" 00:14:52.363 } 00:14:52.363 ], 00:14:52.363 "allow_any_host": true, 00:14:52.363 "hosts": [], 00:14:52.363 "serial_number": "SPDK1", 00:14:52.363 "model_number": "SPDK bdev Controller", 00:14:52.363 "max_namespaces": 32, 00:14:52.363 "min_cntlid": 1, 00:14:52.363 "max_cntlid": 65519, 00:14:52.363 "namespaces": [ 00:14:52.363 { 00:14:52.363 "nsid": 1, 00:14:52.363 "bdev_name": "Malloc1", 00:14:52.363 "name": "Malloc1", 00:14:52.363 "nguid": "F343E7BCCB7749F0B9123A1AACB1956E", 00:14:52.363 "uuid": "f343e7bc-cb77-49f0-b912-3a1aacb1956e" 00:14:52.363 }, 00:14:52.363 { 00:14:52.363 "nsid": 2, 00:14:52.363 "bdev_name": "Malloc3", 00:14:52.363 "name": "Malloc3", 00:14:52.363 "nguid": "7D19FC2587434C9B9D40F9485921AF5A", 00:14:52.363 "uuid": "7d19fc25-8743-4c9b-9d40-f9485921af5a" 
00:14:52.363 } 00:14:52.363 ] 00:14:52.363 }, 00:14:52.363 { 00:14:52.363 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:52.363 "subtype": "NVMe", 00:14:52.363 "listen_addresses": [ 00:14:52.363 { 00:14:52.363 "trtype": "VFIOUSER", 00:14:52.363 "adrfam": "IPv4", 00:14:52.363 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:52.363 "trsvcid": "0" 00:14:52.363 } 00:14:52.363 ], 00:14:52.363 "allow_any_host": true, 00:14:52.363 "hosts": [], 00:14:52.363 "serial_number": "SPDK2", 00:14:52.363 "model_number": "SPDK bdev Controller", 00:14:52.363 "max_namespaces": 32, 00:14:52.363 "min_cntlid": 1, 00:14:52.363 "max_cntlid": 65519, 00:14:52.363 "namespaces": [ 00:14:52.363 { 00:14:52.363 "nsid": 1, 00:14:52.363 "bdev_name": "Malloc2", 00:14:52.363 "name": "Malloc2", 00:14:52.363 "nguid": "6FCABE62894C4635959082F7F0A117CF", 00:14:52.363 "uuid": "6fcabe62-894c-4635-9590-82f7f0a117cf" 00:14:52.363 }, 00:14:52.363 { 00:14:52.363 "nsid": 2, 00:14:52.363 "bdev_name": "Malloc4", 00:14:52.363 "name": "Malloc4", 00:14:52.363 "nguid": "45DF1BB78B2F4C5A9D6E07AAD736339B", 00:14:52.363 "uuid": "45df1bb7-8b2f-4c5a-9d6e-07aad736339b" 00:14:52.363 } 00:14:52.363 ] 00:14:52.363 } 00:14:52.363 ] 00:14:52.364 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 411077 00:14:52.364 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:52.364 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 402934 00:14:52.364 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 402934 ']' 00:14:52.364 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 402934 00:14:52.364 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:52.364 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:14:52.364 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 402934 00:14:52.364 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:52.364 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:52.364 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 402934' 00:14:52.364 killing process with pid 402934 00:14:52.364 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 402934 00:14:52.364 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 402934 00:14:52.624 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:52.624 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:52.624 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:52.624 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:52.624 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:52.624 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=411313 00:14:52.624 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 411313' 00:14:52.624 Process pid: 411313 00:14:52.624 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:52.624 12:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:52.624 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 411313 00:14:52.624 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 411313 ']' 00:14:52.624 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.624 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:52.624 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.624 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:52.624 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:52.883 [2024-11-20 12:24:35.750445] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:52.883 [2024-11-20 12:24:35.751350] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:14:52.884 [2024-11-20 12:24:35.751389] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.884 [2024-11-20 12:24:35.823904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:52.884 [2024-11-20 12:24:35.861360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:52.884 [2024-11-20 12:24:35.861398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.884 [2024-11-20 12:24:35.861406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.884 [2024-11-20 12:24:35.861412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.884 [2024-11-20 12:24:35.861417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:52.884 [2024-11-20 12:24:35.862883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.884 [2024-11-20 12:24:35.863004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.884 [2024-11-20 12:24:35.863035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.884 [2024-11-20 12:24:35.863035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:52.884 [2024-11-20 12:24:35.931153] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:52.884 [2024-11-20 12:24:35.932012] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:52.884 [2024-11-20 12:24:35.932185] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:52.884 [2024-11-20 12:24:35.932553] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:52.884 [2024-11-20 12:24:35.932612] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:14:52.884 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:52.884 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:52.884 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:54.264 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:54.264 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:54.264 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:54.264 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:54.264 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:54.264 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:54.264 Malloc1 00:14:54.523 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:54.523 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:54.782 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:55.041 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:55.041 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:55.041 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:55.301 Malloc2 00:14:55.301 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:55.560 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:55.560 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:55.820 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:55.820 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 411313 00:14:55.820 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 411313 ']' 00:14:55.820 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 411313 00:14:55.820 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:55.820 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:55.820 12:24:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 411313 00:14:55.820 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:55.820 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:55.820 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 411313' 00:14:55.820 killing process with pid 411313 00:14:55.820 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 411313 00:14:55.820 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 411313 00:14:56.080 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:56.080 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:56.080 00:14:56.080 real 0m50.926s 00:14:56.080 user 3m17.087s 00:14:56.080 sys 0m3.217s 00:14:56.080 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.080 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:56.080 ************************************ 00:14:56.080 END TEST nvmf_vfio_user 00:14:56.080 ************************************ 00:14:56.080 12:24:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:56.080 12:24:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:56.080 12:24:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.080 12:24:39 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:56.080 ************************************ 00:14:56.080 START TEST nvmf_vfio_user_nvme_compliance 00:14:56.080 ************************************ 00:14:56.080 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:56.340 * Looking for test storage... 00:14:56.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:56.340 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:56.340 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:14:56.340 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:56.340 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:56.340 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:56.340 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:56.340 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:56.340 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:56.341 12:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.341 12:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:56.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.341 --rc genhtml_branch_coverage=1 00:14:56.341 --rc genhtml_function_coverage=1 00:14:56.341 --rc genhtml_legend=1 00:14:56.341 --rc geninfo_all_blocks=1 00:14:56.341 --rc geninfo_unexecuted_blocks=1 00:14:56.341 00:14:56.341 ' 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:56.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.341 --rc genhtml_branch_coverage=1 00:14:56.341 --rc genhtml_function_coverage=1 00:14:56.341 --rc genhtml_legend=1 00:14:56.341 --rc geninfo_all_blocks=1 00:14:56.341 --rc geninfo_unexecuted_blocks=1 00:14:56.341 00:14:56.341 ' 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:56.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.341 --rc genhtml_branch_coverage=1 00:14:56.341 --rc genhtml_function_coverage=1 00:14:56.341 --rc 
genhtml_legend=1 00:14:56.341 --rc geninfo_all_blocks=1 00:14:56.341 --rc geninfo_unexecuted_blocks=1 00:14:56.341 00:14:56.341 ' 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:56.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.341 --rc genhtml_branch_coverage=1 00:14:56.341 --rc genhtml_function_coverage=1 00:14:56.341 --rc genhtml_legend=1 00:14:56.341 --rc geninfo_all_blocks=1 00:14:56.341 --rc geninfo_unexecuted_blocks=1 00:14:56.341 00:14:56.341 ' 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.341 12:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:56.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:56.341 12:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=411860 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 411860' 00:14:56.341 Process pid: 411860 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 411860 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 411860 ']' 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.341 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:56.341 [2024-11-20 12:24:39.404611] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:14:56.341 [2024-11-20 12:24:39.404662] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.601 [2024-11-20 12:24:39.482803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:56.601 [2024-11-20 12:24:39.523096] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.601 [2024-11-20 12:24:39.523135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.601 [2024-11-20 12:24:39.523146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.601 [2024-11-20 12:24:39.523153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.601 [2024-11-20 12:24:39.523175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:56.601 [2024-11-20 12:24:39.524519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.601 [2024-11-20 12:24:39.524627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.601 [2024-11-20 12:24:39.524627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.601 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.601 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:56.601 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:57.539 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:57.539 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:57.539 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:57.539 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.539 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:57.539 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.539 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:57.539 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:57.539 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.539 12:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:57.799 malloc0 00:14:57.799 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.799 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:57.799 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.799 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:57.799 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.799 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:57.799 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.799 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:57.799 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.799 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:57.799 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.799 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:57.799 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:57.799 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:57.799 00:14:57.799 00:14:57.799 CUnit - A unit testing framework for C - Version 2.1-3 00:14:57.799 http://cunit.sourceforge.net/ 00:14:57.799 00:14:57.799 00:14:57.799 Suite: nvme_compliance 00:14:57.799 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 12:24:40.872371] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:57.799 [2024-11-20 12:24:40.873700] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:57.799 [2024-11-20 12:24:40.873716] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:57.799 [2024-11-20 12:24:40.873722] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:57.799 [2024-11-20 12:24:40.875388] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:57.799 passed 00:14:58.059 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 12:24:40.954982] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.059 [2024-11-20 12:24:40.959028] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.059 passed 00:14:58.059 Test: admin_identify_ns ...[2024-11-20 12:24:41.038028] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.059 [2024-11-20 12:24:41.099957] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:58.059 [2024-11-20 12:24:41.107964] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:58.059 [2024-11-20 12:24:41.129063] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:58.059 passed 00:14:58.318 Test: admin_get_features_mandatory_features ...[2024-11-20 12:24:41.206304] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.318 [2024-11-20 12:24:41.209330] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.318 passed 00:14:58.318 Test: admin_get_features_optional_features ...[2024-11-20 12:24:41.289846] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.318 [2024-11-20 12:24:41.292865] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.318 passed 00:14:58.318 Test: admin_set_features_number_of_queues ...[2024-11-20 12:24:41.370775] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.578 [2024-11-20 12:24:41.476031] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.578 passed 00:14:58.578 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 12:24:41.551201] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.578 [2024-11-20 12:24:41.554233] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.578 passed 00:14:58.578 Test: admin_get_log_page_with_lpo ...[2024-11-20 12:24:41.632153] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.837 [2024-11-20 12:24:41.700962] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:58.837 [2024-11-20 12:24:41.714043] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.837 passed 00:14:58.837 Test: fabric_property_get ...[2024-11-20 12:24:41.791188] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.837 [2024-11-20 12:24:41.792435] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:58.837 [2024-11-20 12:24:41.794207] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.837 passed 00:14:58.837 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 12:24:41.873724] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.837 [2024-11-20 12:24:41.874970] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:58.837 [2024-11-20 12:24:41.876749] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.837 passed 00:14:59.096 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 12:24:41.954729] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.096 [2024-11-20 12:24:42.038957] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:59.096 [2024-11-20 12:24:42.054952] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:59.096 [2024-11-20 12:24:42.060034] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.096 passed 00:14:59.096 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 12:24:42.135100] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.096 [2024-11-20 12:24:42.136350] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:59.096 [2024-11-20 12:24:42.138119] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.096 passed 00:14:59.356 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 12:24:42.216018] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.356 [2024-11-20 12:24:42.292956] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:59.356 [2024-11-20 
12:24:42.316965] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:59.356 [2024-11-20 12:24:42.322038] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.356 passed 00:14:59.356 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 12:24:42.399980] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.356 [2024-11-20 12:24:42.401206] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:59.356 [2024-11-20 12:24:42.401230] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:59.356 [2024-11-20 12:24:42.406020] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.356 passed 00:14:59.615 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 12:24:42.479917] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.615 [2024-11-20 12:24:42.571960] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:59.615 [2024-11-20 12:24:42.579953] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:59.615 [2024-11-20 12:24:42.587960] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:59.615 [2024-11-20 12:24:42.595957] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:59.615 [2024-11-20 12:24:42.625040] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.615 passed 00:14:59.615 Test: admin_create_io_sq_verify_pc ...[2024-11-20 12:24:42.702134] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.615 [2024-11-20 12:24:42.718962] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:59.874 [2024-11-20 12:24:42.735302] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.874 passed 00:14:59.874 Test: admin_create_io_qp_max_qps ...[2024-11-20 12:24:42.815842] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:00.811 [2024-11-20 12:24:43.916960] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:01.380 [2024-11-20 12:24:44.297319] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:01.380 passed 00:15:01.380 Test: admin_create_io_sq_shared_cq ...[2024-11-20 12:24:44.370403] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:01.639 [2024-11-20 12:24:44.501959] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:01.639 [2024-11-20 12:24:44.539006] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:01.639 passed 00:15:01.639 00:15:01.639 Run Summary: Type Total Ran Passed Failed Inactive 00:15:01.639 suites 1 1 n/a 0 0 00:15:01.639 tests 18 18 18 0 0 00:15:01.639 asserts 360 360 360 0 n/a 00:15:01.639 00:15:01.639 Elapsed time = 1.511 seconds 00:15:01.639 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 411860 00:15:01.639 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 411860 ']' 00:15:01.639 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 411860 00:15:01.639 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:01.639 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.639 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 411860 00:15:01.639 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:01.639 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:01.639 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 411860' 00:15:01.639 killing process with pid 411860 00:15:01.639 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 411860 00:15:01.639 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 411860 00:15:01.898 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:01.898 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:01.898 00:15:01.898 real 0m5.675s 00:15:01.898 user 0m15.876s 00:15:01.898 sys 0m0.533s 00:15:01.898 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.898 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:01.898 ************************************ 00:15:01.898 END TEST nvmf_vfio_user_nvme_compliance 00:15:01.898 ************************************ 00:15:01.898 12:24:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:01.898 12:24:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:01.898 12:24:44 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.898 12:24:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:01.898 ************************************ 00:15:01.898 START TEST nvmf_vfio_user_fuzz 00:15:01.898 ************************************ 00:15:01.898 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:01.898 * Looking for test storage... 00:15:01.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:01.898 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:01.898 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:15:01.898 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:02.158 12:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:02.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.158 --rc genhtml_branch_coverage=1 00:15:02.158 --rc genhtml_function_coverage=1 00:15:02.158 --rc genhtml_legend=1 00:15:02.158 --rc geninfo_all_blocks=1 00:15:02.158 --rc geninfo_unexecuted_blocks=1 00:15:02.158 00:15:02.158 ' 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:02.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.158 --rc genhtml_branch_coverage=1 00:15:02.158 --rc genhtml_function_coverage=1 00:15:02.158 --rc genhtml_legend=1 00:15:02.158 --rc geninfo_all_blocks=1 00:15:02.158 --rc geninfo_unexecuted_blocks=1 00:15:02.158 00:15:02.158 ' 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:02.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.158 --rc genhtml_branch_coverage=1 00:15:02.158 --rc genhtml_function_coverage=1 00:15:02.158 --rc genhtml_legend=1 00:15:02.158 --rc geninfo_all_blocks=1 00:15:02.158 --rc geninfo_unexecuted_blocks=1 00:15:02.158 00:15:02.158 ' 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:02.158 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:02.158 --rc genhtml_branch_coverage=1 00:15:02.158 --rc genhtml_function_coverage=1 00:15:02.158 --rc genhtml_legend=1 00:15:02.158 --rc geninfo_all_blocks=1 00:15:02.158 --rc geninfo_unexecuted_blocks=1 00:15:02.158 00:15:02.158 ' 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.158 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.159 12:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:02.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=412905 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 412905' 00:15:02.159 Process pid: 412905 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 412905 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 412905 ']' 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:02.159 12:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:02.159 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:02.417 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:02.417 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:02.417 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:03.355 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:03.356 malloc0 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:03.356 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:35.439 Fuzzing completed. Shutting down the fuzz application 00:15:35.439 00:15:35.439 Dumping successful admin opcodes: 00:15:35.439 8, 9, 10, 24, 00:15:35.439 Dumping successful io opcodes: 00:15:35.439 0, 00:15:35.439 NS: 0x20000081ef00 I/O qp, Total commands completed: 975916, total successful commands: 3822, random_seed: 3875632448 00:15:35.439 NS: 0x20000081ef00 admin qp, Total commands completed: 238305, total successful commands: 1911, random_seed: 3144550144 00:15:35.439 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:35.439 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.439 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.439 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.439 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 412905 00:15:35.439 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 412905 ']' 00:15:35.439 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 412905 00:15:35.439 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:35.439 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:35.439 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 412905 00:15:35.439 12:25:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:35.439 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:35.439 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 412905' 00:15:35.439 killing process with pid 412905 00:15:35.440 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 412905 00:15:35.440 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 412905 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:35.440 00:15:35.440 real 0m32.206s 00:15:35.440 user 0m29.392s 00:15:35.440 sys 0m31.694s 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.440 ************************************ 00:15:35.440 END TEST nvmf_vfio_user_fuzz 00:15:35.440 ************************************ 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:35.440 ************************************ 00:15:35.440 START TEST nvmf_auth_target 00:15:35.440 ************************************ 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:35.440 * Looking for test storage... 00:15:35.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:35.440 12:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:35.440 12:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:35.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.440 --rc genhtml_branch_coverage=1 00:15:35.440 --rc genhtml_function_coverage=1 00:15:35.440 --rc genhtml_legend=1 00:15:35.440 --rc geninfo_all_blocks=1 00:15:35.440 --rc geninfo_unexecuted_blocks=1 00:15:35.440 00:15:35.440 ' 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:35.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.440 --rc genhtml_branch_coverage=1 00:15:35.440 --rc genhtml_function_coverage=1 00:15:35.440 --rc genhtml_legend=1 00:15:35.440 --rc geninfo_all_blocks=1 00:15:35.440 --rc geninfo_unexecuted_blocks=1 00:15:35.440 00:15:35.440 ' 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:35.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.440 --rc genhtml_branch_coverage=1 00:15:35.440 --rc genhtml_function_coverage=1 00:15:35.440 --rc genhtml_legend=1 00:15:35.440 --rc geninfo_all_blocks=1 00:15:35.440 --rc geninfo_unexecuted_blocks=1 00:15:35.440 00:15:35.440 ' 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:35.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.440 --rc genhtml_branch_coverage=1 00:15:35.440 --rc genhtml_function_coverage=1 00:15:35.440 --rc genhtml_legend=1 00:15:35.440 
--rc geninfo_all_blocks=1 00:15:35.440 --rc geninfo_unexecuted_blocks=1 00:15:35.440 00:15:35.440 ' 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.440 
12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.440 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:35.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:35.441 12:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:35.441 12:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:35.441 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.715 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:40.715 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:40.715 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:40.715 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:40.715 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:40.715 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:40.715 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:40.715 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:40.715 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:40.715 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:40.715 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:40.715 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:40.715 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:40.715 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:40.715 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:40.715 12:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:40.716 12:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:40.716 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:40.716 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.716 
12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:40.716 Found net devices under 0000:86:00.0: cvl_0_0 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:40.716 
12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:40.716 Found net devices under 0000:86:00.1: cvl_0_1 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:40.716 12:25:23 
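The device-discovery loop logged above (`gather_supported_nvmf_pci_devs`) builds allow-lists of `(vendor, device)` IDs for Intel E810, Intel X722, and Mellanox ConnectX NICs, then keeps only PCI devices that match one of them. A minimal, self-contained sketch of that matching is below; the ID sets are transcribed from the log, but the function name and its input shape are illustrative assumptions, not SPDK's actual interface.

```python
# Hypothetical sketch of the allow-list matching done by
# gather_supported_nvmf_pci_devs in nvmf/common.sh.
INTEL, MELLANOX = 0x8086, 0x15B3

E810 = {0x1592, 0x159B}            # Intel E810 variants (from the log)
X722 = {0x37D2}                    # Intel X722
MLX5 = {0xA2DC, 0x1021, 0xA2D6, 0x101D, 0x101B,
        0x1017, 0x1019, 0x1015, 0x1013}  # Mellanox ConnectX family

def supported_nvmf_devs(pci_bus):
    """pci_bus maps a PCI address (BDF) to (vendor_id, device_id).
    Returns the addresses of recognized NICs, E810 first, matching the
    preference the log shows on this E810 test bed."""
    e810 = [a for a, (v, d) in pci_bus.items() if v == INTEL and d in E810]
    x722 = [a for a, (v, d) in pci_bus.items() if v == INTEL and d in X722]
    mlx = [a for a, (v, d) in pci_bus.items() if v == MELLANOX and d in MLX5]
    return e810 + x722 + mlx
```

On this host the two `0x159b` ports (`0000:86:00.0` and `0000:86:00.1`) fall in the E810 set, which is why the log prints `Found 0000:86:00.0 (0x8086 - 0x159b)` and then resolves their `net/` entries to `cvl_0_0` and `cvl_0_1`.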
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:40.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:15:40.716 00:15:40.716 --- 10.0.0.2 ping statistics --- 00:15:40.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.716 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:40.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:40.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:15:40.716 00:15:40.716 --- 10.0.0.1 ping statistics --- 00:15:40.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.716 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=421360 00:15:40.716 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 421360 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 421360 ']' 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=421382 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7cad76342bfbf52069541f02342c4232bfc6277f6177bd25 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.OTn 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7cad76342bfbf52069541f02342c4232bfc6277f6177bd25 0 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7cad76342bfbf52069541f02342c4232bfc6277f6177bd25 0 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7cad76342bfbf52069541f02342c4232bfc6277f6177bd25 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.OTn 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.OTn 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.OTn 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=486b65b905847528d66a6f8805bf87acb38c65860de891c04cfdc6881f6a810b 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.92O 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 486b65b905847528d66a6f8805bf87acb38c65860de891c04cfdc6881f6a810b 3 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 486b65b905847528d66a6f8805bf87acb38c65860de891c04cfdc6881f6a810b 3 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=486b65b905847528d66a6f8805bf87acb38c65860de891c04cfdc6881f6a810b 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.92O 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.92O 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.92O 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a64c97b43ffe9dff7123a4b9eaf5fa6a 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.vp6 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a64c97b43ffe9dff7123a4b9eaf5fa6a 1 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
a64c97b43ffe9dff7123a4b9eaf5fa6a 1 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a64c97b43ffe9dff7123a4b9eaf5fa6a 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.vp6 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.vp6 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.vp6 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=886aa9e124fda417ea336425d0fc63106164ee0c2866860e 00:15:40.717 12:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.k1p 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 886aa9e124fda417ea336425d0fc63106164ee0c2866860e 2 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 886aa9e124fda417ea336425d0fc63106164ee0c2866860e 2 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=886aa9e124fda417ea336425d0fc63106164ee0c2866860e 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:40.717 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:40.977 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.k1p 00:15:40.977 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.k1p 00:15:40.977 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.k1p 00:15:40.977 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:40.977 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:40.977 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:40.977 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:15:40.977 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:40.977 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:40.977 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:40.977 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=617c374e7041cdc545c872a6e0d2ceafd4bd0f958454cb0b 00:15:40.977 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:40.977 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Mbj 00:15:40.977 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 617c374e7041cdc545c872a6e0d2ceafd4bd0f958454cb0b 2 00:15:40.977 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 617c374e7041cdc545c872a6e0d2ceafd4bd0f958454cb0b 2 00:15:40.977 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:40.977 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=617c374e7041cdc545c872a6e0d2ceafd4bd0f958454cb0b 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Mbj 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Mbj 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.Mbj 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a922333e8a46a0aaf7e116fbb8f683d8 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ivd 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a922333e8a46a0aaf7e116fbb8f683d8 1 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a922333e8a46a0aaf7e116fbb8f683d8 1 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a922333e8a46a0aaf7e116fbb8f683d8 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ivd 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ivd 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.ivd 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c113e9643ec03a119719909d180f51d39f365021d30313c4a957c92cf605e13e 00:15:40.978 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:40.978 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.KqB 00:15:40.978 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c113e9643ec03a119719909d180f51d39f365021d30313c4a957c92cf605e13e 3 00:15:40.978 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 c113e9643ec03a119719909d180f51d39f365021d30313c4a957c92cf605e13e 3 00:15:40.978 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:40.978 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:40.978 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c113e9643ec03a119719909d180f51d39f365021d30313c4a957c92cf605e13e 00:15:40.978 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:40.978 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:40.978 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.KqB 00:15:40.978 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.KqB 00:15:40.978 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.KqB 00:15:40.978 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:40.978 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 421360 00:15:40.978 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 421360 ']' 00:15:40.978 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.978 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:40.978 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
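The `gen_dhchap_key`/`format_key` steps logged above read random hex from `/dev/urandom` with `xxd` and pipe it through an inline `python -` to produce a `DHHC-1:…:` secret. A plausible reconstruction of that formatting, based on the NVMe DH-HMAC-CHAP secret representation (base64 of the ASCII secret plus a little-endian CRC-32, prefixed with the hash-function id), is sketched below; treat the exact layout as an assumption rather than SPDK's verbatim helper.

```python
import base64
import zlib

# Digest ids as declared in the log's digests=() array.
DIGESTS = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}

def format_dhchap_key(secret_hex: str, digest: str) -> str:
    """Assumed reconstruction of format_key: the ASCII hex string itself
    is the secret, a CRC-32 of it is appended little-endian, and the
    result is base64-encoded into a DHHC-1 string."""
    secret = secret_hex.encode("ascii")
    crc = zlib.crc32(secret).to_bytes(4, "little")
    b64 = base64.b64encode(secret + crc).decode("ascii")
    return f"DHHC-1:{DIGESTS[digest]:02x}:{b64}:"
```

This also explains the `len`/`xxd -l` pairing in the log: `len=48` reads 24 random bytes, which hex-encode to a 48-character ASCII secret, and DH-HMAC-CHAP secrets must be 32, 48, or 64 bytes long.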
00:15:40.978 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:40.978 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.237 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.237 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:41.237 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 421382 /var/tmp/host.sock 00:15:41.237 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 421382 ']' 00:15:41.237 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:41.237 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.237 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:41.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:15:41.237 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.237 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.496 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.496 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:41.496 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:41.496 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.496 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.496 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.496 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:41.496 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.OTn 00:15:41.496 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.496 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.496 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.496 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.OTn 00:15:41.496 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.OTn 00:15:41.756 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.92O ]] 00:15:41.756 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.92O 00:15:41.756 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.756 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.756 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.756 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.92O 00:15:41.756 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.92O 00:15:42.016 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:42.016 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vp6 00:15:42.016 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.016 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.016 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.016 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.vp6 00:15:42.016 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.vp6 00:15:42.016 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.k1p ]] 00:15:42.016 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k1p 00:15:42.016 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.016 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.275 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.275 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k1p 00:15:42.275 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k1p 00:15:42.275 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:42.275 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Mbj 00:15:42.275 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.275 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.275 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.275 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Mbj 00:15:42.275 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Mbj 00:15:42.534 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.ivd ]] 00:15:42.534 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ivd 00:15:42.534 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.534 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.534 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.534 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ivd 00:15:42.534 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ivd 00:15:42.792 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:42.792 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.KqB 00:15:42.792 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.792 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.792 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.792 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.KqB 00:15:42.792 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.KqB 00:15:43.052 12:25:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:43.052 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:43.052 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:43.052 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.052 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:43.052 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:43.052 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:43.052 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.052 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.052 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:43.052 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:43.052 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.052 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.052 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.052 12:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.052 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.052 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.052 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.052 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.311 00:15:43.311 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.311 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.311 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.571 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.571 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.571 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.571 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:43.571 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.571 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.571 { 00:15:43.571 "cntlid": 1, 00:15:43.571 "qid": 0, 00:15:43.571 "state": "enabled", 00:15:43.571 "thread": "nvmf_tgt_poll_group_000", 00:15:43.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:43.571 "listen_address": { 00:15:43.571 "trtype": "TCP", 00:15:43.571 "adrfam": "IPv4", 00:15:43.571 "traddr": "10.0.0.2", 00:15:43.571 "trsvcid": "4420" 00:15:43.571 }, 00:15:43.571 "peer_address": { 00:15:43.571 "trtype": "TCP", 00:15:43.571 "adrfam": "IPv4", 00:15:43.571 "traddr": "10.0.0.1", 00:15:43.571 "trsvcid": "53006" 00:15:43.571 }, 00:15:43.571 "auth": { 00:15:43.571 "state": "completed", 00:15:43.571 "digest": "sha256", 00:15:43.571 "dhgroup": "null" 00:15:43.571 } 00:15:43.571 } 00:15:43.571 ]' 00:15:43.571 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.571 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:43.571 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.571 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:43.571 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.830 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.830 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.830 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
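The `jq` probes at target/auth.sh@75-77 assert on the qpair's negotiated auth parameters. An equivalent minimal check in Python, using an abbreviated copy of the qpair record captured above (field values taken from this run's `nvmf_subsystem_get_qpairs` output; address fields omitted for brevity):

```python
import json

# Abbreviated qpair record from this run's output.
qpairs = json.loads("""
[
  {
    "cntlid": 1,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha256",
      "dhgroup": "null"
    }
  }
]
""")

auth = qpairs[0]["auth"]
# Mirrors jq -r '.[0].auth.digest', '.[0].auth.dhgroup', '.[0].auth.state'
assert auth["digest"] == "sha256"
assert auth["dhgroup"] == "null"
assert auth["state"] == "completed"
```

The shell test compares each extracted value against the digest/dhgroup pair it configured via `bdev_nvme_set_options`, so a mismatch here means authentication negotiated something other than what was requested.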
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.830 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:15:43.830 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:15:44.400 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.400 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:44.400 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.400 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.400 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.400 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.400 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:15:44.400 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:44.658 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:44.658 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.658 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:44.658 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:44.658 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:44.658 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.658 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.658 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.658 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.658 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.658 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.658 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.658 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.917 00:15:44.917 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.917 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.917 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.177 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.177 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.177 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.177 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.177 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.177 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.177 { 00:15:45.177 "cntlid": 3, 00:15:45.177 "qid": 0, 00:15:45.177 "state": "enabled", 00:15:45.177 "thread": "nvmf_tgt_poll_group_000", 00:15:45.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:45.177 "listen_address": { 00:15:45.177 "trtype": "TCP", 00:15:45.177 "adrfam": "IPv4", 00:15:45.177 
"traddr": "10.0.0.2", 00:15:45.177 "trsvcid": "4420" 00:15:45.177 }, 00:15:45.177 "peer_address": { 00:15:45.177 "trtype": "TCP", 00:15:45.177 "adrfam": "IPv4", 00:15:45.177 "traddr": "10.0.0.1", 00:15:45.177 "trsvcid": "53020" 00:15:45.177 }, 00:15:45.177 "auth": { 00:15:45.177 "state": "completed", 00:15:45.177 "digest": "sha256", 00:15:45.177 "dhgroup": "null" 00:15:45.177 } 00:15:45.177 } 00:15:45.177 ]' 00:15:45.177 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.177 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.177 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.177 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:45.177 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.436 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.436 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.436 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.436 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:15:45.436 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:15:46.003 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.003 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.003 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.003 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.003 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.003 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.003 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:46.003 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:46.263 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:46.263 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.263 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:46.263 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:15:46.263 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:46.263 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.263 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.263 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.263 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.263 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.263 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.263 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.263 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.522 00:15:46.522 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.522 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.522 
12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.781 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.781 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.781 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.781 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.781 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.781 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.781 { 00:15:46.781 "cntlid": 5, 00:15:46.781 "qid": 0, 00:15:46.781 "state": "enabled", 00:15:46.781 "thread": "nvmf_tgt_poll_group_000", 00:15:46.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:46.781 "listen_address": { 00:15:46.781 "trtype": "TCP", 00:15:46.781 "adrfam": "IPv4", 00:15:46.781 "traddr": "10.0.0.2", 00:15:46.781 "trsvcid": "4420" 00:15:46.781 }, 00:15:46.781 "peer_address": { 00:15:46.781 "trtype": "TCP", 00:15:46.781 "adrfam": "IPv4", 00:15:46.781 "traddr": "10.0.0.1", 00:15:46.781 "trsvcid": "53056" 00:15:46.781 }, 00:15:46.781 "auth": { 00:15:46.781 "state": "completed", 00:15:46.781 "digest": "sha256", 00:15:46.781 "dhgroup": "null" 00:15:46.781 } 00:15:46.781 } 00:15:46.781 ]' 00:15:46.781 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.781 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.781 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:15:46.781 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:46.781 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.781 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.781 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.781 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.040 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:15:47.040 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:15:47.607 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.607 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:47.607 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.607 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.607 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.607 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.607 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:47.607 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:47.867 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:47.867 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.867 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:47.867 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:47.867 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:47.867 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.867 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:47.867 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.867 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:47.867 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.867 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:47.867 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.867 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.126 00:15:48.126 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.126 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.126 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.386 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.386 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.386 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.386 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.386 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.386 
12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.386 { 00:15:48.386 "cntlid": 7, 00:15:48.386 "qid": 0, 00:15:48.386 "state": "enabled", 00:15:48.386 "thread": "nvmf_tgt_poll_group_000", 00:15:48.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:48.386 "listen_address": { 00:15:48.386 "trtype": "TCP", 00:15:48.386 "adrfam": "IPv4", 00:15:48.386 "traddr": "10.0.0.2", 00:15:48.386 "trsvcid": "4420" 00:15:48.386 }, 00:15:48.386 "peer_address": { 00:15:48.386 "trtype": "TCP", 00:15:48.386 "adrfam": "IPv4", 00:15:48.386 "traddr": "10.0.0.1", 00:15:48.386 "trsvcid": "53068" 00:15:48.386 }, 00:15:48.386 "auth": { 00:15:48.386 "state": "completed", 00:15:48.386 "digest": "sha256", 00:15:48.386 "dhgroup": "null" 00:15:48.386 } 00:15:48.386 } 00:15:48.386 ]' 00:15:48.386 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.386 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:48.386 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.386 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:48.386 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.386 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.386 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.386 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.644 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:15:48.644 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:15:49.213 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.213 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.213 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.213 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.213 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.213 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:49.213 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.213 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:49.213 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:49.473 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:49.473 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.473 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:49.473 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:49.473 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:49.473 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.473 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.473 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.473 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.473 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.473 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.473 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.473 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.760 00:15:49.760 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.760 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.760 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.020 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.020 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.020 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.020 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.020 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.020 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.020 { 00:15:50.020 "cntlid": 9, 00:15:50.020 "qid": 0, 00:15:50.020 "state": "enabled", 00:15:50.020 "thread": "nvmf_tgt_poll_group_000", 00:15:50.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:50.020 "listen_address": { 00:15:50.020 "trtype": "TCP", 00:15:50.020 "adrfam": "IPv4", 00:15:50.020 "traddr": "10.0.0.2", 00:15:50.020 "trsvcid": "4420" 00:15:50.020 }, 00:15:50.020 "peer_address": { 00:15:50.020 "trtype": "TCP", 00:15:50.020 "adrfam": "IPv4", 00:15:50.020 "traddr": "10.0.0.1", 00:15:50.020 "trsvcid": "53092" 00:15:50.020 
}, 00:15:50.020 "auth": { 00:15:50.020 "state": "completed", 00:15:50.020 "digest": "sha256", 00:15:50.020 "dhgroup": "ffdhe2048" 00:15:50.020 } 00:15:50.020 } 00:15:50.020 ]' 00:15:50.020 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.020 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.020 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.020 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:50.020 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.020 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.020 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.020 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.279 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:15:50.279 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret 
DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:15:50.848 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.848 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.848 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.848 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.848 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.848 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.848 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:50.848 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:51.108 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:51.109 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.109 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:51.109 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:51.109 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:51.109 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.109 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.109 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.109 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.109 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.109 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.109 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.109 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.368 00:15:51.368 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.368 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.368 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.627 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.627 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.627 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.627 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.627 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.628 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.628 { 00:15:51.628 "cntlid": 11, 00:15:51.628 "qid": 0, 00:15:51.628 "state": "enabled", 00:15:51.628 "thread": "nvmf_tgt_poll_group_000", 00:15:51.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:51.628 "listen_address": { 00:15:51.628 "trtype": "TCP", 00:15:51.628 "adrfam": "IPv4", 00:15:51.628 "traddr": "10.0.0.2", 00:15:51.628 "trsvcid": "4420" 00:15:51.628 }, 00:15:51.628 "peer_address": { 00:15:51.628 "trtype": "TCP", 00:15:51.628 "adrfam": "IPv4", 00:15:51.628 "traddr": "10.0.0.1", 00:15:51.628 "trsvcid": "49088" 00:15:51.628 }, 00:15:51.628 "auth": { 00:15:51.628 "state": "completed", 00:15:51.628 "digest": "sha256", 00:15:51.628 "dhgroup": "ffdhe2048" 00:15:51.628 } 00:15:51.628 } 00:15:51.628 ]' 00:15:51.628 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.628 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.628 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.628 12:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:51.628 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.628 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.628 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.628 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.887 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:15:51.887 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:15:52.456 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.456 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:52.456 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:52.456 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.456 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.456 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.456 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:52.456 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:52.715 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:52.715 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.715 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:52.715 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:52.715 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:52.715 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.715 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.715 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.715 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:52.715 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.715 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.715 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.715 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.976 00:15:52.976 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.976 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.976 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.976 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.976 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.976 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.976 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.291 12:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.291 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.291 { 00:15:53.291 "cntlid": 13, 00:15:53.291 "qid": 0, 00:15:53.291 "state": "enabled", 00:15:53.291 "thread": "nvmf_tgt_poll_group_000", 00:15:53.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:53.291 "listen_address": { 00:15:53.291 "trtype": "TCP", 00:15:53.291 "adrfam": "IPv4", 00:15:53.291 "traddr": "10.0.0.2", 00:15:53.291 "trsvcid": "4420" 00:15:53.291 }, 00:15:53.291 "peer_address": { 00:15:53.291 "trtype": "TCP", 00:15:53.291 "adrfam": "IPv4", 00:15:53.291 "traddr": "10.0.0.1", 00:15:53.291 "trsvcid": "49118" 00:15:53.291 }, 00:15:53.291 "auth": { 00:15:53.291 "state": "completed", 00:15:53.291 "digest": "sha256", 00:15:53.291 "dhgroup": "ffdhe2048" 00:15:53.291 } 00:15:53.291 } 00:15:53.291 ]' 00:15:53.291 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.291 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.291 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.291 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:53.291 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.291 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.291 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.291 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.612 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:15:53.612 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:15:53.894 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.894 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.894 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.894 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.153 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.153 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.153 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:54.153 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:54.153 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:54.153 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.153 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:54.153 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:54.153 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:54.153 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.153 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:54.153 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.153 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.153 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.154 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:54.154 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.154 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.412 00:15:54.412 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.412 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.412 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.672 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.672 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.672 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.672 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.672 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.672 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.672 { 00:15:54.672 "cntlid": 15, 00:15:54.672 "qid": 0, 00:15:54.672 "state": "enabled", 00:15:54.672 "thread": "nvmf_tgt_poll_group_000", 00:15:54.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:54.672 "listen_address": { 00:15:54.672 "trtype": "TCP", 00:15:54.672 "adrfam": "IPv4", 00:15:54.672 "traddr": "10.0.0.2", 00:15:54.672 "trsvcid": "4420" 00:15:54.672 }, 00:15:54.672 "peer_address": { 00:15:54.672 "trtype": "TCP", 00:15:54.672 "adrfam": "IPv4", 00:15:54.672 "traddr": "10.0.0.1", 
00:15:54.672 "trsvcid": "49160" 00:15:54.672 }, 00:15:54.672 "auth": { 00:15:54.672 "state": "completed", 00:15:54.672 "digest": "sha256", 00:15:54.672 "dhgroup": "ffdhe2048" 00:15:54.672 } 00:15:54.672 } 00:15:54.672 ]' 00:15:54.672 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.672 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.672 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.672 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:54.672 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.932 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.932 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.932 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.932 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:15:54.932 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:15:55.500 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.500 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:55.500 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.500 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.500 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.500 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:55.500 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.500 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:55.500 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:55.759 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:55.759 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.759 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:55.759 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:55.759 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:55.759 12:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.759 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.759 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.759 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.759 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.759 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.759 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.759 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.018 00:15:56.018 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.018 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.018 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.278 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.278 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.278 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.278 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.278 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.278 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.278 { 00:15:56.278 "cntlid": 17, 00:15:56.278 "qid": 0, 00:15:56.278 "state": "enabled", 00:15:56.278 "thread": "nvmf_tgt_poll_group_000", 00:15:56.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:56.278 "listen_address": { 00:15:56.278 "trtype": "TCP", 00:15:56.278 "adrfam": "IPv4", 00:15:56.278 "traddr": "10.0.0.2", 00:15:56.278 "trsvcid": "4420" 00:15:56.278 }, 00:15:56.278 "peer_address": { 00:15:56.278 "trtype": "TCP", 00:15:56.278 "adrfam": "IPv4", 00:15:56.278 "traddr": "10.0.0.1", 00:15:56.278 "trsvcid": "49184" 00:15:56.278 }, 00:15:56.278 "auth": { 00:15:56.278 "state": "completed", 00:15:56.278 "digest": "sha256", 00:15:56.278 "dhgroup": "ffdhe3072" 00:15:56.278 } 00:15:56.278 } 00:15:56.278 ]' 00:15:56.278 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.278 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.278 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.278 12:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:56.278 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.538 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.538 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.538 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.538 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:15:56.538 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:15:57.106 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.106 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:57.106 12:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.106 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.106 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.106 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.106 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:57.106 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:57.365 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:57.365 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.365 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:57.365 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:57.365 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:57.365 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.365 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.365 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.365 12:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.365 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.365 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.365 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.365 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.624 00:15:57.624 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.624 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.624 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.882 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.882 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.882 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.882 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:57.882 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.882 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.882 { 00:15:57.882 "cntlid": 19, 00:15:57.882 "qid": 0, 00:15:57.882 "state": "enabled", 00:15:57.882 "thread": "nvmf_tgt_poll_group_000", 00:15:57.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:57.882 "listen_address": { 00:15:57.882 "trtype": "TCP", 00:15:57.882 "adrfam": "IPv4", 00:15:57.882 "traddr": "10.0.0.2", 00:15:57.882 "trsvcid": "4420" 00:15:57.882 }, 00:15:57.882 "peer_address": { 00:15:57.882 "trtype": "TCP", 00:15:57.882 "adrfam": "IPv4", 00:15:57.882 "traddr": "10.0.0.1", 00:15:57.882 "trsvcid": "49218" 00:15:57.882 }, 00:15:57.882 "auth": { 00:15:57.882 "state": "completed", 00:15:57.882 "digest": "sha256", 00:15:57.882 "dhgroup": "ffdhe3072" 00:15:57.882 } 00:15:57.882 } 00:15:57.882 ]' 00:15:57.882 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.882 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.882 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.882 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:57.882 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.141 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.141 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.141 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.141 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:15:58.141 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:15:58.709 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.709 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:58.709 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.709 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.968 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.968 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.968 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:58.968 12:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:58.968 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:58.968 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.968 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:58.968 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:58.968 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:58.968 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.968 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.968 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.968 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.968 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.968 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.968 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.968 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.227 00:15:59.227 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.227 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.227 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.486 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.486 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.486 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.486 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.486 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.486 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.486 { 00:15:59.486 "cntlid": 21, 00:15:59.486 "qid": 0, 00:15:59.486 "state": "enabled", 00:15:59.486 "thread": "nvmf_tgt_poll_group_000", 00:15:59.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:59.486 "listen_address": { 00:15:59.486 "trtype": "TCP", 00:15:59.486 "adrfam": "IPv4", 00:15:59.486 "traddr": "10.0.0.2", 00:15:59.486 
"trsvcid": "4420" 00:15:59.486 }, 00:15:59.486 "peer_address": { 00:15:59.486 "trtype": "TCP", 00:15:59.486 "adrfam": "IPv4", 00:15:59.486 "traddr": "10.0.0.1", 00:15:59.486 "trsvcid": "49262" 00:15:59.486 }, 00:15:59.486 "auth": { 00:15:59.486 "state": "completed", 00:15:59.486 "digest": "sha256", 00:15:59.486 "dhgroup": "ffdhe3072" 00:15:59.486 } 00:15:59.486 } 00:15:59.486 ]' 00:15:59.486 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.486 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.486 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.486 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:59.486 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.746 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.746 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.746 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.746 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:15:59.746 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:16:00.315 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.315 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:00.315 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.315 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.315 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.315 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.315 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:00.315 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:00.574 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:00.574 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.574 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:00.574 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:00.574 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:00.574 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.574 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:00.574 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.574 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.574 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.574 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:00.574 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.574 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.833 00:16:00.833 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.833 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.833 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.092 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.092 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.092 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.092 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.092 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.092 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.092 { 00:16:01.092 "cntlid": 23, 00:16:01.092 "qid": 0, 00:16:01.092 "state": "enabled", 00:16:01.092 "thread": "nvmf_tgt_poll_group_000", 00:16:01.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:01.092 "listen_address": { 00:16:01.092 "trtype": "TCP", 00:16:01.092 "adrfam": "IPv4", 00:16:01.092 "traddr": "10.0.0.2", 00:16:01.092 "trsvcid": "4420" 00:16:01.092 }, 00:16:01.092 "peer_address": { 00:16:01.092 "trtype": "TCP", 00:16:01.092 "adrfam": "IPv4", 00:16:01.092 "traddr": "10.0.0.1", 00:16:01.092 "trsvcid": "35250" 00:16:01.092 }, 00:16:01.092 "auth": { 00:16:01.092 "state": "completed", 00:16:01.092 "digest": "sha256", 00:16:01.092 "dhgroup": "ffdhe3072" 00:16:01.092 } 00:16:01.092 } 00:16:01.092 ]' 00:16:01.092 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.092 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.092 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.092 12:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:01.092 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.351 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.351 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.351 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.351 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:16:01.351 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:16:01.920 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.920 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.920 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.920 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:01.920 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.920 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:01.920 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.920 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:01.920 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:02.179 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:02.179 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.179 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:02.179 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:02.179 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:02.179 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.179 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.179 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.179 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:02.179 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.179 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.179 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.179 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.438 00:16:02.438 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.438 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.438 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.696 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.696 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.696 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.696 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.696 12:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.696 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.696 { 00:16:02.696 "cntlid": 25, 00:16:02.696 "qid": 0, 00:16:02.696 "state": "enabled", 00:16:02.696 "thread": "nvmf_tgt_poll_group_000", 00:16:02.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:02.696 "listen_address": { 00:16:02.696 "trtype": "TCP", 00:16:02.696 "adrfam": "IPv4", 00:16:02.696 "traddr": "10.0.0.2", 00:16:02.696 "trsvcid": "4420" 00:16:02.696 }, 00:16:02.696 "peer_address": { 00:16:02.696 "trtype": "TCP", 00:16:02.696 "adrfam": "IPv4", 00:16:02.696 "traddr": "10.0.0.1", 00:16:02.696 "trsvcid": "35264" 00:16:02.696 }, 00:16:02.696 "auth": { 00:16:02.696 "state": "completed", 00:16:02.696 "digest": "sha256", 00:16:02.696 "dhgroup": "ffdhe4096" 00:16:02.696 } 00:16:02.696 } 00:16:02.696 ]' 00:16:02.696 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.696 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:02.696 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.955 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:02.955 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.955 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.955 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.955 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.214 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:16:03.214 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:16:03.782 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.782 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:03.782 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.782 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.782 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.782 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.782 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:03.783 12:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:03.783 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:03.783 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.783 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:03.783 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:03.783 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:03.783 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.783 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.783 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.783 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.783 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.783 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.783 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.783 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.042 00:16:04.042 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.042 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.042 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.300 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.301 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.301 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.301 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.301 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.301 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.301 { 00:16:04.301 "cntlid": 27, 00:16:04.301 "qid": 0, 00:16:04.301 "state": "enabled", 00:16:04.301 "thread": "nvmf_tgt_poll_group_000", 00:16:04.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:04.301 "listen_address": { 00:16:04.301 "trtype": "TCP", 00:16:04.301 "adrfam": "IPv4", 00:16:04.301 "traddr": "10.0.0.2", 00:16:04.301 
"trsvcid": "4420" 00:16:04.301 }, 00:16:04.301 "peer_address": { 00:16:04.301 "trtype": "TCP", 00:16:04.301 "adrfam": "IPv4", 00:16:04.301 "traddr": "10.0.0.1", 00:16:04.301 "trsvcid": "35290" 00:16:04.301 }, 00:16:04.301 "auth": { 00:16:04.301 "state": "completed", 00:16:04.301 "digest": "sha256", 00:16:04.301 "dhgroup": "ffdhe4096" 00:16:04.301 } 00:16:04.301 } 00:16:04.301 ]' 00:16:04.301 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.560 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.560 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.560 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:04.560 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.560 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.560 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.560 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.819 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:16:04.819 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:16:05.387 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.387 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:05.387 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.387 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.387 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.387 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.387 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:05.387 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:05.387 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:05.387 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.387 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:05.387 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:05.387 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:05.387 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.387 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.387 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.387 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.388 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.388 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.388 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.388 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.647 00:16:05.906 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.906 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:05.906 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.906 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.906 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.906 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.906 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.906 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.906 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.906 { 00:16:05.906 "cntlid": 29, 00:16:05.906 "qid": 0, 00:16:05.906 "state": "enabled", 00:16:05.906 "thread": "nvmf_tgt_poll_group_000", 00:16:05.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:05.906 "listen_address": { 00:16:05.906 "trtype": "TCP", 00:16:05.906 "adrfam": "IPv4", 00:16:05.906 "traddr": "10.0.0.2", 00:16:05.906 "trsvcid": "4420" 00:16:05.906 }, 00:16:05.906 "peer_address": { 00:16:05.906 "trtype": "TCP", 00:16:05.906 "adrfam": "IPv4", 00:16:05.906 "traddr": "10.0.0.1", 00:16:05.906 "trsvcid": "35322" 00:16:05.907 }, 00:16:05.907 "auth": { 00:16:05.907 "state": "completed", 00:16:05.907 "digest": "sha256", 00:16:05.907 "dhgroup": "ffdhe4096" 00:16:05.907 } 00:16:05.907 } 00:16:05.907 ]' 00:16:05.907 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.166 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:06.166 12:25:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.166 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:06.166 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.166 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.166 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.166 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.426 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:16:06.426 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:16:06.993 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.993 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:06.993 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.993 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.993 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.993 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.993 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:06.993 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:06.994 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:06.994 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.994 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:06.994 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:06.994 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:06.994 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.994 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:06.994 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.994 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.252 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.252 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:07.252 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.252 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.511 00:16:07.511 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.511 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.511 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.511 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.511 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.511 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.511 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:07.511 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.511 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.511 { 00:16:07.511 "cntlid": 31, 00:16:07.511 "qid": 0, 00:16:07.511 "state": "enabled", 00:16:07.511 "thread": "nvmf_tgt_poll_group_000", 00:16:07.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:07.511 "listen_address": { 00:16:07.511 "trtype": "TCP", 00:16:07.511 "adrfam": "IPv4", 00:16:07.511 "traddr": "10.0.0.2", 00:16:07.511 "trsvcid": "4420" 00:16:07.511 }, 00:16:07.511 "peer_address": { 00:16:07.511 "trtype": "TCP", 00:16:07.511 "adrfam": "IPv4", 00:16:07.511 "traddr": "10.0.0.1", 00:16:07.511 "trsvcid": "35358" 00:16:07.511 }, 00:16:07.511 "auth": { 00:16:07.511 "state": "completed", 00:16:07.511 "digest": "sha256", 00:16:07.511 "dhgroup": "ffdhe4096" 00:16:07.511 } 00:16:07.511 } 00:16:07.511 ]' 00:16:07.511 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.771 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.771 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.771 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:07.771 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.771 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.771 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.771 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.030 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:16:08.030 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:16:08.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:08.598 12:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:08.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:08.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:08.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:08.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:08.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.598 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.857 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.857 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.857 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.857 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.116 00:16:09.116 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.116 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.116 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.376 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.376 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.376 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.376 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.376 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.376 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.376 { 00:16:09.376 "cntlid": 33, 00:16:09.376 "qid": 0, 00:16:09.376 "state": "enabled", 00:16:09.376 "thread": "nvmf_tgt_poll_group_000", 00:16:09.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:09.376 "listen_address": { 00:16:09.376 "trtype": "TCP", 00:16:09.376 "adrfam": "IPv4", 00:16:09.376 "traddr": "10.0.0.2", 00:16:09.376 
"trsvcid": "4420" 00:16:09.376 }, 00:16:09.376 "peer_address": { 00:16:09.376 "trtype": "TCP", 00:16:09.376 "adrfam": "IPv4", 00:16:09.376 "traddr": "10.0.0.1", 00:16:09.376 "trsvcid": "35390" 00:16:09.376 }, 00:16:09.376 "auth": { 00:16:09.376 "state": "completed", 00:16:09.376 "digest": "sha256", 00:16:09.376 "dhgroup": "ffdhe6144" 00:16:09.376 } 00:16:09.376 } 00:16:09.376 ]' 00:16:09.376 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.376 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.376 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.376 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:09.376 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.376 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.376 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.376 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.636 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:16:09.636 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:16:10.204 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.204 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:10.204 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.204 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.204 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.204 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.204 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:10.204 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:10.463 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:10.463 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.463 12:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.463 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:10.463 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:10.463 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.463 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.463 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.463 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.463 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.463 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.463 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.463 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.723 00:16:10.723 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.723 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.723 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.982 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.982 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.982 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.982 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.982 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.982 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.982 { 00:16:10.982 "cntlid": 35, 00:16:10.982 "qid": 0, 00:16:10.982 "state": "enabled", 00:16:10.982 "thread": "nvmf_tgt_poll_group_000", 00:16:10.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:10.982 "listen_address": { 00:16:10.982 "trtype": "TCP", 00:16:10.982 "adrfam": "IPv4", 00:16:10.982 "traddr": "10.0.0.2", 00:16:10.982 "trsvcid": "4420" 00:16:10.982 }, 00:16:10.982 "peer_address": { 00:16:10.982 "trtype": "TCP", 00:16:10.982 "adrfam": "IPv4", 00:16:10.982 "traddr": "10.0.0.1", 00:16:10.982 "trsvcid": "35424" 00:16:10.982 }, 00:16:10.982 "auth": { 00:16:10.982 "state": "completed", 00:16:10.982 "digest": "sha256", 00:16:10.982 "dhgroup": "ffdhe6144" 00:16:10.982 } 00:16:10.982 } 00:16:10.982 ]' 00:16:10.982 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.982 12:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.982 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.982 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:10.982 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.241 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.241 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.241 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.241 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:16:11.241 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:16:11.810 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.810 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.810 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.810 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.810 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.810 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.810 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:11.810 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:12.069 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:12.069 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.069 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.069 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:12.069 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:12.069 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.069 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:12.069 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.069 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.069 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.069 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.069 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.069 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.328 00:16:12.588 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.588 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.588 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.588 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.588 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.588 12:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.588 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.588 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.588 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.588 { 00:16:12.588 "cntlid": 37, 00:16:12.588 "qid": 0, 00:16:12.588 "state": "enabled", 00:16:12.588 "thread": "nvmf_tgt_poll_group_000", 00:16:12.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:12.588 "listen_address": { 00:16:12.588 "trtype": "TCP", 00:16:12.588 "adrfam": "IPv4", 00:16:12.588 "traddr": "10.0.0.2", 00:16:12.588 "trsvcid": "4420" 00:16:12.588 }, 00:16:12.588 "peer_address": { 00:16:12.588 "trtype": "TCP", 00:16:12.588 "adrfam": "IPv4", 00:16:12.588 "traddr": "10.0.0.1", 00:16:12.588 "trsvcid": "48952" 00:16:12.588 }, 00:16:12.588 "auth": { 00:16:12.588 "state": "completed", 00:16:12.588 "digest": "sha256", 00:16:12.588 "dhgroup": "ffdhe6144" 00:16:12.588 } 00:16:12.588 } 00:16:12.588 ]' 00:16:12.588 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.847 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.847 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.847 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:12.847 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.847 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.847 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.847 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.107 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:16:13.107 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:16:13.675 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.675 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.675 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.675 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.675 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.675 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.675 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:13.675 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:13.934 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:13.934 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.934 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:13.934 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:13.934 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:13.934 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.934 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:13.934 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.934 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.934 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.934 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:13.934 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.934 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.193 00:16:14.193 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.193 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.193 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.452 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.452 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.452 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.452 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.452 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.452 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.452 { 00:16:14.452 "cntlid": 39, 00:16:14.452 "qid": 0, 00:16:14.452 "state": "enabled", 00:16:14.452 "thread": "nvmf_tgt_poll_group_000", 00:16:14.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:14.452 "listen_address": { 00:16:14.452 "trtype": "TCP", 00:16:14.452 "adrfam": 
"IPv4", 00:16:14.452 "traddr": "10.0.0.2", 00:16:14.452 "trsvcid": "4420" 00:16:14.452 }, 00:16:14.452 "peer_address": { 00:16:14.452 "trtype": "TCP", 00:16:14.452 "adrfam": "IPv4", 00:16:14.452 "traddr": "10.0.0.1", 00:16:14.452 "trsvcid": "48978" 00:16:14.452 }, 00:16:14.452 "auth": { 00:16:14.452 "state": "completed", 00:16:14.452 "digest": "sha256", 00:16:14.452 "dhgroup": "ffdhe6144" 00:16:14.452 } 00:16:14.452 } 00:16:14.452 ]' 00:16:14.452 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.452 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.452 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.452 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:14.452 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.452 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.452 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.452 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.710 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:16:14.710 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:16:15.277 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.277 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:15.277 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.277 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.277 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.277 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.277 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.277 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:15.277 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:15.535 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:15.535 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.535 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:15.535 
12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:15.535 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:15.535 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.535 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.535 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.535 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.535 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.535 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.535 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.535 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.103 00:16:16.103 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.103 12:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.103 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.103 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.103 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.103 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.103 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.103 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.103 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.103 { 00:16:16.103 "cntlid": 41, 00:16:16.103 "qid": 0, 00:16:16.103 "state": "enabled", 00:16:16.103 "thread": "nvmf_tgt_poll_group_000", 00:16:16.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:16.103 "listen_address": { 00:16:16.103 "trtype": "TCP", 00:16:16.103 "adrfam": "IPv4", 00:16:16.103 "traddr": "10.0.0.2", 00:16:16.103 "trsvcid": "4420" 00:16:16.103 }, 00:16:16.103 "peer_address": { 00:16:16.103 "trtype": "TCP", 00:16:16.103 "adrfam": "IPv4", 00:16:16.103 "traddr": "10.0.0.1", 00:16:16.103 "trsvcid": "48994" 00:16:16.103 }, 00:16:16.103 "auth": { 00:16:16.103 "state": "completed", 00:16:16.103 "digest": "sha256", 00:16:16.103 "dhgroup": "ffdhe8192" 00:16:16.103 } 00:16:16.103 } 00:16:16.103 ]' 00:16:16.103 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.372 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:16.372 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.372 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:16.372 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.372 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.372 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.372 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.631 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:16:16.631 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:16:17.199 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.199 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.199 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.199 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.199 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.199 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.199 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:17.199 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:17.458 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:17.458 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.458 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:17.458 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:17.458 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:17.458 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.458 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:17.458 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.458 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.458 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.458 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.458 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.458 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.718 00:16:17.977 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.977 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.977 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.977 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.977 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.977 12:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.977 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.977 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.977 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.977 { 00:16:17.977 "cntlid": 43, 00:16:17.977 "qid": 0, 00:16:17.977 "state": "enabled", 00:16:17.977 "thread": "nvmf_tgt_poll_group_000", 00:16:17.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.977 "listen_address": { 00:16:17.977 "trtype": "TCP", 00:16:17.977 "adrfam": "IPv4", 00:16:17.977 "traddr": "10.0.0.2", 00:16:17.977 "trsvcid": "4420" 00:16:17.977 }, 00:16:17.977 "peer_address": { 00:16:17.977 "trtype": "TCP", 00:16:17.977 "adrfam": "IPv4", 00:16:17.977 "traddr": "10.0.0.1", 00:16:17.977 "trsvcid": "49022" 00:16:17.977 }, 00:16:17.977 "auth": { 00:16:17.977 "state": "completed", 00:16:17.977 "digest": "sha256", 00:16:17.977 "dhgroup": "ffdhe8192" 00:16:17.977 } 00:16:17.977 } 00:16:17.977 ]' 00:16:17.977 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.237 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.237 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.237 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:18.237 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.237 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.237 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.237 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.496 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:16:18.496 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:16:19.065 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.065 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:19.065 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.065 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.065 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.065 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.065 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:19.065 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:19.065 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:19.065 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.065 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:19.065 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:19.065 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:19.065 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.065 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.065 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.065 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.065 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.065 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.065 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.065 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.632 00:16:19.632 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.632 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.632 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.891 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.892 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.892 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.892 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.892 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.892 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.892 { 00:16:19.892 "cntlid": 45, 00:16:19.892 "qid": 0, 00:16:19.892 "state": "enabled", 00:16:19.892 "thread": "nvmf_tgt_poll_group_000", 00:16:19.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:19.892 
"listen_address": { 00:16:19.892 "trtype": "TCP", 00:16:19.892 "adrfam": "IPv4", 00:16:19.892 "traddr": "10.0.0.2", 00:16:19.892 "trsvcid": "4420" 00:16:19.892 }, 00:16:19.892 "peer_address": { 00:16:19.892 "trtype": "TCP", 00:16:19.892 "adrfam": "IPv4", 00:16:19.892 "traddr": "10.0.0.1", 00:16:19.892 "trsvcid": "49050" 00:16:19.892 }, 00:16:19.892 "auth": { 00:16:19.892 "state": "completed", 00:16:19.892 "digest": "sha256", 00:16:19.892 "dhgroup": "ffdhe8192" 00:16:19.892 } 00:16:19.892 } 00:16:19.892 ]' 00:16:19.892 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.892 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.892 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.892 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:19.892 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.151 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.151 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.151 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.151 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:16:20.151 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:16:20.720 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.720 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.720 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.720 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.721 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.721 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.721 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:20.721 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:20.980 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:20.980 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.980 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:20.980 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:20.980 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:20.980 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.980 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:20.980 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.980 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.980 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.980 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:20.980 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.980 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.548 00:16:21.548 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.548 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:21.548 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.808 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.808 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.808 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.808 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.808 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.808 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.808 { 00:16:21.808 "cntlid": 47, 00:16:21.808 "qid": 0, 00:16:21.808 "state": "enabled", 00:16:21.808 "thread": "nvmf_tgt_poll_group_000", 00:16:21.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:21.808 "listen_address": { 00:16:21.808 "trtype": "TCP", 00:16:21.808 "adrfam": "IPv4", 00:16:21.808 "traddr": "10.0.0.2", 00:16:21.808 "trsvcid": "4420" 00:16:21.808 }, 00:16:21.808 "peer_address": { 00:16:21.808 "trtype": "TCP", 00:16:21.808 "adrfam": "IPv4", 00:16:21.808 "traddr": "10.0.0.1", 00:16:21.808 "trsvcid": "59912" 00:16:21.808 }, 00:16:21.808 "auth": { 00:16:21.808 "state": "completed", 00:16:21.808 "digest": "sha256", 00:16:21.808 "dhgroup": "ffdhe8192" 00:16:21.808 } 00:16:21.808 } 00:16:21.808 ]' 00:16:21.808 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.808 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.808 12:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.808 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:21.808 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.808 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.808 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.808 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.067 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:16:22.067 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:16:22.635 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.635 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.635 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:22.635 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.635 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.635 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:22.635 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.635 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.635 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:22.635 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:22.894 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:22.894 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.894 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:22.894 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:22.894 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:22.894 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.894 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.894 
12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.894 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.894 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.894 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.894 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.894 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.153 00:16:23.153 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.153 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.153 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.413 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.413 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.413 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.413 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.413 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.413 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.413 { 00:16:23.413 "cntlid": 49, 00:16:23.413 "qid": 0, 00:16:23.413 "state": "enabled", 00:16:23.413 "thread": "nvmf_tgt_poll_group_000", 00:16:23.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:23.413 "listen_address": { 00:16:23.413 "trtype": "TCP", 00:16:23.413 "adrfam": "IPv4", 00:16:23.413 "traddr": "10.0.0.2", 00:16:23.413 "trsvcid": "4420" 00:16:23.413 }, 00:16:23.413 "peer_address": { 00:16:23.413 "trtype": "TCP", 00:16:23.413 "adrfam": "IPv4", 00:16:23.413 "traddr": "10.0.0.1", 00:16:23.413 "trsvcid": "59932" 00:16:23.413 }, 00:16:23.413 "auth": { 00:16:23.413 "state": "completed", 00:16:23.413 "digest": "sha384", 00:16:23.413 "dhgroup": "null" 00:16:23.413 } 00:16:23.413 } 00:16:23.413 ]' 00:16:23.413 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.413 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.413 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.413 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:23.413 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.413 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.413 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:16:23.413 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.672 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:16:23.672 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:16:24.246 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.246 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.246 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.246 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.246 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.246 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.246 12:26:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:24.246 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:24.504 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:24.504 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.504 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:24.504 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:24.505 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:24.505 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.505 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.505 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.505 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.505 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.505 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.505 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.505 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.765 00:16:24.765 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.765 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.765 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.024 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.024 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.024 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.024 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.024 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.024 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.024 { 00:16:25.024 "cntlid": 51, 00:16:25.024 "qid": 0, 00:16:25.024 "state": "enabled", 00:16:25.024 "thread": "nvmf_tgt_poll_group_000", 00:16:25.024 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:25.024 "listen_address": { 00:16:25.024 "trtype": "TCP", 00:16:25.024 "adrfam": "IPv4", 00:16:25.024 "traddr": "10.0.0.2", 00:16:25.024 "trsvcid": "4420" 00:16:25.024 }, 00:16:25.024 "peer_address": { 00:16:25.024 "trtype": "TCP", 00:16:25.024 "adrfam": "IPv4", 00:16:25.024 "traddr": "10.0.0.1", 00:16:25.024 "trsvcid": "59944" 00:16:25.024 }, 00:16:25.024 "auth": { 00:16:25.024 "state": "completed", 00:16:25.024 "digest": "sha384", 00:16:25.024 "dhgroup": "null" 00:16:25.024 } 00:16:25.024 } 00:16:25.024 ]' 00:16:25.024 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.024 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.024 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.024 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:25.024 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.024 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.024 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.024 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.283 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:16:25.283 12:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:16:25.852 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.852 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:25.852 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.852 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.852 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.852 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.852 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:25.852 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:26.112 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:26.112 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:16:26.112 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:26.112 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:26.112 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:26.112 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.112 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.112 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.112 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.112 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.112 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.112 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.112 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.370 00:16:26.370 12:26:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.370 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.370 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.630 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.630 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.630 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.630 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.630 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.630 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.630 { 00:16:26.630 "cntlid": 53, 00:16:26.630 "qid": 0, 00:16:26.630 "state": "enabled", 00:16:26.630 "thread": "nvmf_tgt_poll_group_000", 00:16:26.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:26.630 "listen_address": { 00:16:26.630 "trtype": "TCP", 00:16:26.630 "adrfam": "IPv4", 00:16:26.630 "traddr": "10.0.0.2", 00:16:26.630 "trsvcid": "4420" 00:16:26.630 }, 00:16:26.630 "peer_address": { 00:16:26.630 "trtype": "TCP", 00:16:26.630 "adrfam": "IPv4", 00:16:26.630 "traddr": "10.0.0.1", 00:16:26.630 "trsvcid": "59972" 00:16:26.630 }, 00:16:26.630 "auth": { 00:16:26.630 "state": "completed", 00:16:26.630 "digest": "sha384", 00:16:26.630 "dhgroup": "null" 00:16:26.630 } 00:16:26.630 } 00:16:26.630 ]' 00:16:26.630 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:26.630 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.630 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.630 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:26.630 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.630 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.630 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.630 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.889 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:16:26.889 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:16:27.458 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.458 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.458 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.458 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.458 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.458 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.458 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:27.458 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:27.717 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:27.717 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.717 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:27.717 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:27.717 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:27.717 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.717 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:27.717 
12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.717 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.717 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.717 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:27.717 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.717 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.976 00:16:27.976 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.976 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.976 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.235 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.235 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.235 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.235 12:26:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.235 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.235 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.235 { 00:16:28.235 "cntlid": 55, 00:16:28.235 "qid": 0, 00:16:28.235 "state": "enabled", 00:16:28.235 "thread": "nvmf_tgt_poll_group_000", 00:16:28.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:28.235 "listen_address": { 00:16:28.235 "trtype": "TCP", 00:16:28.235 "adrfam": "IPv4", 00:16:28.235 "traddr": "10.0.0.2", 00:16:28.235 "trsvcid": "4420" 00:16:28.235 }, 00:16:28.235 "peer_address": { 00:16:28.235 "trtype": "TCP", 00:16:28.235 "adrfam": "IPv4", 00:16:28.235 "traddr": "10.0.0.1", 00:16:28.235 "trsvcid": "60006" 00:16:28.235 }, 00:16:28.235 "auth": { 00:16:28.235 "state": "completed", 00:16:28.235 "digest": "sha384", 00:16:28.235 "dhgroup": "null" 00:16:28.235 } 00:16:28.235 } 00:16:28.235 ]' 00:16:28.235 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.235 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.235 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.235 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:28.235 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.235 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.235 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.235 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.494 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:16:28.494 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:16:29.062 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.062 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.062 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.062 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.062 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.062 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.062 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.062 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:29.062 12:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:29.321 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:29.321 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.321 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:29.321 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:29.321 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:29.322 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.322 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.322 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.322 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.322 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.322 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.322 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.322 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.581 00:16:29.581 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.581 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.581 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.841 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.841 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.841 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.841 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.841 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.841 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.841 { 00:16:29.841 "cntlid": 57, 00:16:29.841 "qid": 0, 00:16:29.841 "state": "enabled", 00:16:29.841 "thread": "nvmf_tgt_poll_group_000", 00:16:29.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:29.841 "listen_address": { 00:16:29.841 "trtype": "TCP", 00:16:29.841 "adrfam": "IPv4", 00:16:29.841 "traddr": "10.0.0.2", 00:16:29.841 
"trsvcid": "4420" 00:16:29.841 }, 00:16:29.841 "peer_address": { 00:16:29.841 "trtype": "TCP", 00:16:29.841 "adrfam": "IPv4", 00:16:29.841 "traddr": "10.0.0.1", 00:16:29.841 "trsvcid": "60020" 00:16:29.841 }, 00:16:29.841 "auth": { 00:16:29.841 "state": "completed", 00:16:29.841 "digest": "sha384", 00:16:29.841 "dhgroup": "ffdhe2048" 00:16:29.841 } 00:16:29.841 } 00:16:29.841 ]' 00:16:29.841 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.841 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.841 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.841 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:29.841 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.841 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.841 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.841 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.101 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:16:30.101 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:16:30.669 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.669 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.669 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.669 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.669 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.669 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.669 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:30.669 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:30.987 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:30.987 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.987 12:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:30.987 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:30.987 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:30.987 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.987 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.987 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.987 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.987 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.987 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.987 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.987 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.987 00:16:31.290 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.290 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.290 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.290 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.290 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.290 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.290 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.290 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.290 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.290 { 00:16:31.290 "cntlid": 59, 00:16:31.290 "qid": 0, 00:16:31.290 "state": "enabled", 00:16:31.290 "thread": "nvmf_tgt_poll_group_000", 00:16:31.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:31.290 "listen_address": { 00:16:31.290 "trtype": "TCP", 00:16:31.290 "adrfam": "IPv4", 00:16:31.290 "traddr": "10.0.0.2", 00:16:31.290 "trsvcid": "4420" 00:16:31.290 }, 00:16:31.290 "peer_address": { 00:16:31.290 "trtype": "TCP", 00:16:31.290 "adrfam": "IPv4", 00:16:31.290 "traddr": "10.0.0.1", 00:16:31.290 "trsvcid": "50850" 00:16:31.290 }, 00:16:31.290 "auth": { 00:16:31.290 "state": "completed", 00:16:31.290 "digest": "sha384", 00:16:31.290 "dhgroup": "ffdhe2048" 00:16:31.290 } 00:16:31.290 } 00:16:31.290 ]' 00:16:31.290 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.290 12:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.290 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.290 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:31.290 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.577 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.577 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.577 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.577 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:16:31.577 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:16:32.145 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.145 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.145 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.145 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.145 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.145 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.145 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:32.145 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:32.405 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:32.405 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.405 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:32.405 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:32.405 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:32.405 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.405 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:32.405 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.405 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.405 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.405 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.405 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.405 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.664 00:16:32.664 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.664 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.664 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.923 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.923 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.923 12:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.923 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.923 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.923 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.923 { 00:16:32.923 "cntlid": 61, 00:16:32.923 "qid": 0, 00:16:32.923 "state": "enabled", 00:16:32.923 "thread": "nvmf_tgt_poll_group_000", 00:16:32.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:32.923 "listen_address": { 00:16:32.923 "trtype": "TCP", 00:16:32.923 "adrfam": "IPv4", 00:16:32.923 "traddr": "10.0.0.2", 00:16:32.923 "trsvcid": "4420" 00:16:32.923 }, 00:16:32.923 "peer_address": { 00:16:32.923 "trtype": "TCP", 00:16:32.923 "adrfam": "IPv4", 00:16:32.923 "traddr": "10.0.0.1", 00:16:32.923 "trsvcid": "50874" 00:16:32.923 }, 00:16:32.923 "auth": { 00:16:32.923 "state": "completed", 00:16:32.923 "digest": "sha384", 00:16:32.923 "dhgroup": "ffdhe2048" 00:16:32.923 } 00:16:32.923 } 00:16:32.923 ]' 00:16:32.923 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.923 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:32.923 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.923 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:32.923 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.182 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.183 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.183 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.183 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:16:33.183 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:16:33.750 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.750 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.750 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.750 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.009 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.009 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.009 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:34.009 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:34.009 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:34.009 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.009 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:34.009 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:34.009 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:34.009 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.009 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:34.009 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.009 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.009 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.009 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:34.009 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.009 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.268 00:16:34.268 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.268 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.268 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.527 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.527 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.527 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.527 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.527 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.527 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.527 { 00:16:34.527 "cntlid": 63, 00:16:34.527 "qid": 0, 00:16:34.527 "state": "enabled", 00:16:34.527 "thread": "nvmf_tgt_poll_group_000", 00:16:34.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:34.528 "listen_address": { 00:16:34.528 "trtype": "TCP", 00:16:34.528 "adrfam": 
"IPv4", 00:16:34.528 "traddr": "10.0.0.2", 00:16:34.528 "trsvcid": "4420" 00:16:34.528 }, 00:16:34.528 "peer_address": { 00:16:34.528 "trtype": "TCP", 00:16:34.528 "adrfam": "IPv4", 00:16:34.528 "traddr": "10.0.0.1", 00:16:34.528 "trsvcid": "50902" 00:16:34.528 }, 00:16:34.528 "auth": { 00:16:34.528 "state": "completed", 00:16:34.528 "digest": "sha384", 00:16:34.528 "dhgroup": "ffdhe2048" 00:16:34.528 } 00:16:34.528 } 00:16:34.528 ]' 00:16:34.528 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.528 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.528 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.787 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.787 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.787 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.787 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.787 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.787 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:16:34.787 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:16:35.355 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:35.615 
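The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` lines in the trace above rely on bash's `${var:+words}` alternate-value expansion: the controller-key argument pair is emitted only when a ctrl-secret exists for that key index, which is why the key3 iterations call `nvmf_subsystem_add_host` and `bdev_nvme_attach_controller` with `--dhchap-key key3` alone while key0–key2 also pass `--dhchap-ctrlr-key`. A standalone sketch of the idiom, with hypothetical stand-in values for the `ckeys` array (the real secrets live in auth.sh):

```shell
# Hypothetical stand-ins for the ckeys array in auth.sh; index 3 has no ctrl key
ckeys=("secret-a" "secret-b" "secret-c" "")

for keyid in "${!ckeys[@]}"; do
    # ${var:+words} expands to "words" only when var is set and non-empty,
    # so index 3 contributes no --dhchap-ctrlr-key argument at all
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid: ${ckey[*]:-<none>}"
done
```

Running this prints `--dhchap-ctrlr-key ckey0` through `ckey2` and `<none>` for key3, matching the argument shapes seen in the RPC invocations above.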
12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.615 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.874 00:16:35.874 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.874 12:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.874 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.133 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.133 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.133 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.133 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.133 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.133 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.133 { 00:16:36.133 "cntlid": 65, 00:16:36.133 "qid": 0, 00:16:36.133 "state": "enabled", 00:16:36.133 "thread": "nvmf_tgt_poll_group_000", 00:16:36.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:36.133 "listen_address": { 00:16:36.133 "trtype": "TCP", 00:16:36.133 "adrfam": "IPv4", 00:16:36.133 "traddr": "10.0.0.2", 00:16:36.133 "trsvcid": "4420" 00:16:36.133 }, 00:16:36.133 "peer_address": { 00:16:36.133 "trtype": "TCP", 00:16:36.133 "adrfam": "IPv4", 00:16:36.133 "traddr": "10.0.0.1", 00:16:36.133 "trsvcid": "50926" 00:16:36.133 }, 00:16:36.133 "auth": { 00:16:36.133 "state": "completed", 00:16:36.133 "digest": "sha384", 00:16:36.133 "dhgroup": "ffdhe3072" 00:16:36.133 } 00:16:36.133 } 00:16:36.133 ]' 00:16:36.133 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.133 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:16:36.133 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.392 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.392 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.392 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.392 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.392 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.392 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:16:36.392 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:16:36.959 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.218 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.477 00:16:37.477 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.477 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.477 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.736 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.736 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.736 12:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.736 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.736 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.736 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.736 { 00:16:37.736 "cntlid": 67, 00:16:37.736 "qid": 0, 00:16:37.736 "state": "enabled", 00:16:37.736 "thread": "nvmf_tgt_poll_group_000", 00:16:37.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:37.736 "listen_address": { 00:16:37.736 "trtype": "TCP", 00:16:37.736 "adrfam": "IPv4", 00:16:37.736 "traddr": "10.0.0.2", 00:16:37.736 "trsvcid": "4420" 00:16:37.736 }, 00:16:37.736 "peer_address": { 00:16:37.736 "trtype": "TCP", 00:16:37.736 "adrfam": "IPv4", 00:16:37.736 "traddr": "10.0.0.1", 00:16:37.736 "trsvcid": "50972" 00:16:37.736 }, 00:16:37.736 "auth": { 00:16:37.736 "state": "completed", 00:16:37.736 "digest": "sha384", 00:16:37.736 "dhgroup": "ffdhe3072" 00:16:37.736 } 00:16:37.736 } 00:16:37.736 ]' 00:16:37.736 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.736 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.736 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.995 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.995 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.995 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.995 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.995 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.254 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:16:38.254 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.822 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.081 00:16:39.340 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.340 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.340 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.340 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.340 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.340 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.340 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.340 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.340 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.340 { 00:16:39.340 "cntlid": 69, 00:16:39.340 "qid": 0, 00:16:39.340 "state": "enabled", 00:16:39.340 "thread": "nvmf_tgt_poll_group_000", 00:16:39.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:39.340 
"listen_address": { 00:16:39.340 "trtype": "TCP", 00:16:39.340 "adrfam": "IPv4", 00:16:39.340 "traddr": "10.0.0.2", 00:16:39.340 "trsvcid": "4420" 00:16:39.340 }, 00:16:39.340 "peer_address": { 00:16:39.340 "trtype": "TCP", 00:16:39.340 "adrfam": "IPv4", 00:16:39.340 "traddr": "10.0.0.1", 00:16:39.340 "trsvcid": "51002" 00:16:39.340 }, 00:16:39.340 "auth": { 00:16:39.340 "state": "completed", 00:16:39.340 "digest": "sha384", 00:16:39.340 "dhgroup": "ffdhe3072" 00:16:39.340 } 00:16:39.340 } 00:16:39.340 ]' 00:16:39.340 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.600 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.600 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.600 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.600 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.600 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.600 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.600 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.859 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:16:39.859 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:16:40.427 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.427 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.427 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.427 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.427 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.427 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.427 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:40.427 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:40.686 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:40.686 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.686 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:40.686 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:40.686 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:40.686 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.686 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:40.686 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.686 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.686 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.686 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:40.686 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.686 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.944 00:16:40.944 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.944 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:40.944 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.944 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.944 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.945 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.945 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.945 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.945 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.945 { 00:16:40.945 "cntlid": 71, 00:16:40.945 "qid": 0, 00:16:40.945 "state": "enabled", 00:16:40.945 "thread": "nvmf_tgt_poll_group_000", 00:16:40.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:40.945 "listen_address": { 00:16:40.945 "trtype": "TCP", 00:16:40.945 "adrfam": "IPv4", 00:16:40.945 "traddr": "10.0.0.2", 00:16:40.945 "trsvcid": "4420" 00:16:40.945 }, 00:16:40.945 "peer_address": { 00:16:40.945 "trtype": "TCP", 00:16:40.945 "adrfam": "IPv4", 00:16:40.945 "traddr": "10.0.0.1", 00:16:40.945 "trsvcid": "57620" 00:16:40.945 }, 00:16:40.945 "auth": { 00:16:40.945 "state": "completed", 00:16:40.945 "digest": "sha384", 00:16:40.945 "dhgroup": "ffdhe3072" 00:16:40.945 } 00:16:40.945 } 00:16:40.945 ]' 00:16:40.945 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.203 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:41.203 12:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.203 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:41.203 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.203 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.203 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.203 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.532 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:16:41.532 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:16:42.098 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.098 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.098 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:42.098 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.098 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.098 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.098 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.098 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:42.098 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:42.098 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:42.098 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.098 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:42.098 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:42.098 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:42.098 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.098 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.098 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:42.098 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.098 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.098 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.098 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.098 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.356 00:16:42.356 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.356 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.356 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.615 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.615 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.615 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.615 12:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.615 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.615 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.615 { 00:16:42.615 "cntlid": 73, 00:16:42.615 "qid": 0, 00:16:42.615 "state": "enabled", 00:16:42.615 "thread": "nvmf_tgt_poll_group_000", 00:16:42.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:42.615 "listen_address": { 00:16:42.615 "trtype": "TCP", 00:16:42.615 "adrfam": "IPv4", 00:16:42.615 "traddr": "10.0.0.2", 00:16:42.615 "trsvcid": "4420" 00:16:42.615 }, 00:16:42.615 "peer_address": { 00:16:42.615 "trtype": "TCP", 00:16:42.615 "adrfam": "IPv4", 00:16:42.615 "traddr": "10.0.0.1", 00:16:42.615 "trsvcid": "57644" 00:16:42.615 }, 00:16:42.615 "auth": { 00:16:42.615 "state": "completed", 00:16:42.615 "digest": "sha384", 00:16:42.615 "dhgroup": "ffdhe4096" 00:16:42.615 } 00:16:42.615 } 00:16:42.615 ]' 00:16:42.615 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.615 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.615 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.873 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.874 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.874 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.874 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.874 12:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.131 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:16:43.132 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.698 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.958 00:16:44.218 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.218 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.218 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.218 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.218 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.218 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.218 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.218 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.218 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.218 { 00:16:44.218 "cntlid": 75, 00:16:44.218 "qid": 0, 00:16:44.218 "state": "enabled", 00:16:44.218 "thread": "nvmf_tgt_poll_group_000", 00:16:44.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:44.218 
"listen_address": { 00:16:44.218 "trtype": "TCP", 00:16:44.218 "adrfam": "IPv4", 00:16:44.218 "traddr": "10.0.0.2", 00:16:44.218 "trsvcid": "4420" 00:16:44.218 }, 00:16:44.218 "peer_address": { 00:16:44.218 "trtype": "TCP", 00:16:44.218 "adrfam": "IPv4", 00:16:44.218 "traddr": "10.0.0.1", 00:16:44.218 "trsvcid": "57664" 00:16:44.218 }, 00:16:44.218 "auth": { 00:16:44.218 "state": "completed", 00:16:44.218 "digest": "sha384", 00:16:44.218 "dhgroup": "ffdhe4096" 00:16:44.218 } 00:16:44.218 } 00:16:44.218 ]' 00:16:44.218 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.477 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.477 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.477 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:44.477 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.477 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.477 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.477 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.736 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:16:44.736 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:16:45.307 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.307 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.307 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.307 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.307 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.307 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.307 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:45.307 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:45.307 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:45.307 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.307 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:45.307 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:45.307 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:45.307 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.307 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.307 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.307 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.307 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.308 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.308 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.308 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.567 00:16:45.567 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:45.567 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.567 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.826 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.826 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.826 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.826 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.826 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.826 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.826 { 00:16:45.826 "cntlid": 77, 00:16:45.826 "qid": 0, 00:16:45.826 "state": "enabled", 00:16:45.826 "thread": "nvmf_tgt_poll_group_000", 00:16:45.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:45.826 "listen_address": { 00:16:45.826 "trtype": "TCP", 00:16:45.826 "adrfam": "IPv4", 00:16:45.826 "traddr": "10.0.0.2", 00:16:45.826 "trsvcid": "4420" 00:16:45.826 }, 00:16:45.826 "peer_address": { 00:16:45.826 "trtype": "TCP", 00:16:45.826 "adrfam": "IPv4", 00:16:45.826 "traddr": "10.0.0.1", 00:16:45.826 "trsvcid": "57692" 00:16:45.826 }, 00:16:45.826 "auth": { 00:16:45.826 "state": "completed", 00:16:45.826 "digest": "sha384", 00:16:45.826 "dhgroup": "ffdhe4096" 00:16:45.826 } 00:16:45.826 } 00:16:45.826 ]' 00:16:45.826 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.826 12:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.826 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.085 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.085 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.085 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.085 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.085 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.344 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:16:46.344 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:16:46.912 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.912 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.912 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.912 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.912 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.912 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.912 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:46.912 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:46.912 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:46.912 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.912 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.912 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:46.912 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.912 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.912 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:46.912 12:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.912 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.912 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.912 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.912 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.912 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.171 00:16:47.430 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.430 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.430 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.430 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.430 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.431 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.431 12:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.431 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.431 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.431 { 00:16:47.431 "cntlid": 79, 00:16:47.431 "qid": 0, 00:16:47.431 "state": "enabled", 00:16:47.431 "thread": "nvmf_tgt_poll_group_000", 00:16:47.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:47.431 "listen_address": { 00:16:47.431 "trtype": "TCP", 00:16:47.431 "adrfam": "IPv4", 00:16:47.431 "traddr": "10.0.0.2", 00:16:47.431 "trsvcid": "4420" 00:16:47.431 }, 00:16:47.431 "peer_address": { 00:16:47.431 "trtype": "TCP", 00:16:47.431 "adrfam": "IPv4", 00:16:47.431 "traddr": "10.0.0.1", 00:16:47.431 "trsvcid": "57716" 00:16:47.431 }, 00:16:47.431 "auth": { 00:16:47.431 "state": "completed", 00:16:47.431 "digest": "sha384", 00:16:47.431 "dhgroup": "ffdhe4096" 00:16:47.431 } 00:16:47.431 } 00:16:47.431 ]' 00:16:47.431 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.689 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.689 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.689 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:47.689 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.689 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.689 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.689 12:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.948 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:16:47.948 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:16:48.517 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.517 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.517 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.517 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.517 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.517 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.517 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.517 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:16:48.517 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:48.776 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:48.776 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.776 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:48.776 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:48.776 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:48.776 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.776 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.776 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.776 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.776 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.776 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.776 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.776 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.035 00:16:49.035 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.035 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.035 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.294 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.294 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.294 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.294 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.294 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.294 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.294 { 00:16:49.294 "cntlid": 81, 00:16:49.294 "qid": 0, 00:16:49.294 "state": "enabled", 00:16:49.294 "thread": "nvmf_tgt_poll_group_000", 00:16:49.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:49.294 "listen_address": { 
00:16:49.294 "trtype": "TCP", 00:16:49.294 "adrfam": "IPv4", 00:16:49.294 "traddr": "10.0.0.2", 00:16:49.294 "trsvcid": "4420" 00:16:49.294 }, 00:16:49.294 "peer_address": { 00:16:49.294 "trtype": "TCP", 00:16:49.294 "adrfam": "IPv4", 00:16:49.294 "traddr": "10.0.0.1", 00:16:49.294 "trsvcid": "57746" 00:16:49.294 }, 00:16:49.294 "auth": { 00:16:49.294 "state": "completed", 00:16:49.294 "digest": "sha384", 00:16:49.294 "dhgroup": "ffdhe6144" 00:16:49.294 } 00:16:49.294 } 00:16:49.294 ]' 00:16:49.294 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.294 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.294 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.294 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.294 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.294 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.294 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.294 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.552 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:16:49.553 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:16:50.119 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.119 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.119 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.119 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.119 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.119 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.119 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:50.119 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:50.378 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:50.378 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:16:50.378 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:50.378 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:50.378 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:50.378 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.378 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.378 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.378 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.378 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.378 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.378 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.379 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.638 00:16:50.638 12:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.638 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.638 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.896 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.896 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.896 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.896 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.896 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.896 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.896 { 00:16:50.896 "cntlid": 83, 00:16:50.896 "qid": 0, 00:16:50.896 "state": "enabled", 00:16:50.896 "thread": "nvmf_tgt_poll_group_000", 00:16:50.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:50.896 "listen_address": { 00:16:50.896 "trtype": "TCP", 00:16:50.896 "adrfam": "IPv4", 00:16:50.896 "traddr": "10.0.0.2", 00:16:50.896 "trsvcid": "4420" 00:16:50.896 }, 00:16:50.896 "peer_address": { 00:16:50.896 "trtype": "TCP", 00:16:50.896 "adrfam": "IPv4", 00:16:50.896 "traddr": "10.0.0.1", 00:16:50.896 "trsvcid": "57784" 00:16:50.896 }, 00:16:50.896 "auth": { 00:16:50.896 "state": "completed", 00:16:50.896 "digest": "sha384", 00:16:50.896 "dhgroup": "ffdhe6144" 00:16:50.896 } 00:16:50.896 } 00:16:50.896 ]' 00:16:50.896 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:16:50.896 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.896 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.896 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:50.896 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.155 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.155 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.155 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.156 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:16:51.156 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:16:51.723 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.982 12:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.982 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.982 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.982 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.982 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.982 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:51.982 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:51.982 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:51.982 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.982 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:51.982 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:51.982 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:51.982 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.982 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.982 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.982 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.982 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.982 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.982 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.982 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.551 00:16:52.551 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.551 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.551 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.551 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.551 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.551 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.551 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.551 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.551 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.551 { 00:16:52.551 "cntlid": 85, 00:16:52.551 "qid": 0, 00:16:52.551 "state": "enabled", 00:16:52.551 "thread": "nvmf_tgt_poll_group_000", 00:16:52.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:52.551 "listen_address": { 00:16:52.551 "trtype": "TCP", 00:16:52.551 "adrfam": "IPv4", 00:16:52.551 "traddr": "10.0.0.2", 00:16:52.551 "trsvcid": "4420" 00:16:52.551 }, 00:16:52.551 "peer_address": { 00:16:52.551 "trtype": "TCP", 00:16:52.551 "adrfam": "IPv4", 00:16:52.551 "traddr": "10.0.0.1", 00:16:52.551 "trsvcid": "54248" 00:16:52.551 }, 00:16:52.551 "auth": { 00:16:52.551 "state": "completed", 00:16:52.551 "digest": "sha384", 00:16:52.551 "dhgroup": "ffdhe6144" 00:16:52.551 } 00:16:52.551 } 00:16:52.551 ]' 00:16:52.551 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.551 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.551 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.810 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.810 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.810 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:52.810 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.810 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.068 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:16:53.068 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:16:53.634 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
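The trace repeats the same per-key cycle for every DH group. The driver loop it comes from (`target/auth.sh@119`-`@123` in the entries above) can be sketched roughly as follows; `hostrpc` and `connect_authenticate` are stubbed for illustration (the real ones call `scripts/rpc.py -s /var/tmp/host.sock` and attach, verify, and detach a controller), and the array contents are inferred from the log, not taken from the script source:

```shell
#!/usr/bin/env bash
# Sketch of the nested loop visible in the trace. Stubs replace the real
# helpers; keys/dhgroups values are inferred from the log entries above.
digest=sha384
dhgroups=(ffdhe6144 ffdhe8192)
keys=(key0 key1 key2 key3)

hostrpc() { echo "rpc: $*"; }                        # stub for rpc.py -s /var/tmp/host.sock
connect_authenticate() { echo "auth: $1 $2 key$3"; } # stub; real one attaches nvme0, checks qpairs

for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do
    # Matches auth.sh@121 in the trace: host options set before each connect.
    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    connect_authenticate "$digest" "$dhgroup" "$keyid"
  done
done
```

This explains why the log shows `bdev_nvme_set_options` re-issued before every `connect_authenticate` call: the options are per-iteration, not set once.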
00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.635 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.201 00:16:54.201 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.201 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.201 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.201 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.201 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.201 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.201 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.201 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.201 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.201 { 00:16:54.201 "cntlid": 87, 00:16:54.201 "qid": 0, 00:16:54.201 "state": "enabled", 00:16:54.201 "thread": "nvmf_tgt_poll_group_000", 00:16:54.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:54.201 "listen_address": { 00:16:54.201 "trtype": 
"TCP", 00:16:54.201 "adrfam": "IPv4", 00:16:54.201 "traddr": "10.0.0.2", 00:16:54.202 "trsvcid": "4420" 00:16:54.202 }, 00:16:54.202 "peer_address": { 00:16:54.202 "trtype": "TCP", 00:16:54.202 "adrfam": "IPv4", 00:16:54.202 "traddr": "10.0.0.1", 00:16:54.202 "trsvcid": "54268" 00:16:54.202 }, 00:16:54.202 "auth": { 00:16:54.202 "state": "completed", 00:16:54.202 "digest": "sha384", 00:16:54.202 "dhgroup": "ffdhe6144" 00:16:54.202 } 00:16:54.202 } 00:16:54.202 ]' 00:16:54.202 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.460 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.460 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.460 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.460 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.460 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.460 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.460 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.719 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:16:54.719 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:16:55.285 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.285 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.285 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.285 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.285 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.285 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.285 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.285 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:55.285 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:55.594 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:55.594 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.594 12:26:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.594 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:55.594 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.594 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.594 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.594 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.594 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.594 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.594 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.594 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.594 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.853 00:16:55.853 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.853 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.853 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.111 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.111 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.111 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.111 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.111 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.111 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.111 { 00:16:56.111 "cntlid": 89, 00:16:56.111 "qid": 0, 00:16:56.111 "state": "enabled", 00:16:56.111 "thread": "nvmf_tgt_poll_group_000", 00:16:56.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:56.111 "listen_address": { 00:16:56.111 "trtype": "TCP", 00:16:56.111 "adrfam": "IPv4", 00:16:56.111 "traddr": "10.0.0.2", 00:16:56.111 "trsvcid": "4420" 00:16:56.111 }, 00:16:56.111 "peer_address": { 00:16:56.111 "trtype": "TCP", 00:16:56.111 "adrfam": "IPv4", 00:16:56.111 "traddr": "10.0.0.1", 00:16:56.111 "trsvcid": "54304" 00:16:56.111 }, 00:16:56.111 "auth": { 00:16:56.111 "state": "completed", 00:16:56.111 "digest": "sha384", 00:16:56.111 "dhgroup": "ffdhe8192" 00:16:56.111 } 00:16:56.111 } 00:16:56.111 ]' 00:16:56.111 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.111 12:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.111 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.111 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.111 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.369 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.369 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.369 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.369 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:16:56.369 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:16:56.937 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
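Each cycle's verification step (`target/auth.sh@75`-`@77` in the entries above) just extracts the `auth` fields from the `nvmf_subsystem_get_qpairs` JSON with `jq -r` and string-compares them. A standalone sketch, using a qpairs payload trimmed from the log (assumes `jq` is installed; in the real test the JSON comes from `rpc_cmd`, not a literal):

```shell
#!/usr/bin/env bash
# qpairs JSON in the shape nvmf_subsystem_get_qpairs returns; values are
# copied from the trace above and trimmed to the fields the check reads.
qpairs='[{"cntlid": 89, "qid": 0, "state": "enabled",
          "auth": {"state": "completed", "digest": "sha384", "dhgroup": "ffdhe8192"}}]'

digest=$(jq -r '.[0].auth.digest'  <<< "$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")
astate=$(jq -r '.[0].auth.state'   <<< "$qpairs")

# Same style of check as the [[ sha384 == \s\h\a\3\8\4 ]] lines in the trace.
[[ $digest == sha384 && $dhgroup == ffdhe8192 && $astate == completed ]] \
  && echo "auth negotiated: $digest/$dhgroup"
```

The escaped-glob comparisons in the trace (`\s\h\a\3\8\4` etc.) are just xtrace's rendering of a literal `[[ x == y ]]` string match, not a pattern the script author wrote.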
00:16:56.937 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.937 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.937 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.937 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.937 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.937 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:56.937 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:57.196 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:57.196 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.196 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:57.196 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:57.196 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:57.196 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.196 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.196 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.196 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.196 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.196 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.196 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.196 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.764 00:16:57.764 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.764 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.764 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.022 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.022 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.022 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.022 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.022 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.022 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.022 { 00:16:58.022 "cntlid": 91, 00:16:58.022 "qid": 0, 00:16:58.022 "state": "enabled", 00:16:58.022 "thread": "nvmf_tgt_poll_group_000", 00:16:58.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:58.022 "listen_address": { 00:16:58.022 "trtype": "TCP", 00:16:58.022 "adrfam": "IPv4", 00:16:58.022 "traddr": "10.0.0.2", 00:16:58.022 "trsvcid": "4420" 00:16:58.022 }, 00:16:58.022 "peer_address": { 00:16:58.022 "trtype": "TCP", 00:16:58.022 "adrfam": "IPv4", 00:16:58.022 "traddr": "10.0.0.1", 00:16:58.022 "trsvcid": "54340" 00:16:58.022 }, 00:16:58.022 "auth": { 00:16:58.022 "state": "completed", 00:16:58.022 "digest": "sha384", 00:16:58.022 "dhgroup": "ffdhe8192" 00:16:58.022 } 00:16:58.022 } 00:16:58.022 ]' 00:16:58.022 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.022 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.022 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.022 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.022 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.022 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:58.022 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.022 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.280 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:16:58.280 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:16:58.848 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.848 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.848 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.848 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.848 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.848 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
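The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` line traced at `auth.sh@68` uses bash's `:+` alternate-value expansion inside an array assignment: when no controller key exists for a key id, the array expands to nothing and the flag pair is omitted from the RPC call entirely. A small sketch of the idiom (the `ckeys` values below are placeholders, not the test's real secrets):

```shell
#!/usr/bin/env bash
# Sketch of the auth.sh@68 idiom: conditionally include a flag pair.
# Placeholder controller keys; key id 3 deliberately has none.
ckeys=([0]="secret0" [1]="secret1" [2]="secret2" [3]="")

build_args() {
  local keyid=$1
  # Expands to two words (--dhchap-ctrlr-key ckeyN) only when a
  # controller key exists for this keyid; otherwise to zero words.
  local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo --dhchap-key "key$keyid" "${ckey[@]}"
}

build_args 1   # → --dhchap-key key1 --dhchap-ctrlr-key ckey1
build_args 3   # → --dhchap-key key3
```

This matches the log: the key1/key2 cycles pass `--dhchap-ctrlr-key ckeyN` to `nvmf_subsystem_add_host`, while the key3 cycle later in this run passes only `--dhchap-key key3`.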
00:16:58.848 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.848 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:59.107 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:59.107 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.107 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:59.107 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:59.107 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:59.107 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.107 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.107 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.107 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.107 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.107 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.107 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.107 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.673 00:16:59.673 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.673 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.673 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.932 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.932 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.932 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.932 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.932 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.932 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.932 { 00:16:59.932 "cntlid": 93, 00:16:59.932 "qid": 0, 00:16:59.932 "state": "enabled", 00:16:59.932 "thread": "nvmf_tgt_poll_group_000", 00:16:59.932 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:59.932 "listen_address": { 00:16:59.932 "trtype": "TCP", 00:16:59.932 "adrfam": "IPv4", 00:16:59.932 "traddr": "10.0.0.2", 00:16:59.932 "trsvcid": "4420" 00:16:59.932 }, 00:16:59.932 "peer_address": { 00:16:59.932 "trtype": "TCP", 00:16:59.932 "adrfam": "IPv4", 00:16:59.932 "traddr": "10.0.0.1", 00:16:59.932 "trsvcid": "54374" 00:16:59.932 }, 00:16:59.932 "auth": { 00:16:59.932 "state": "completed", 00:16:59.932 "digest": "sha384", 00:16:59.932 "dhgroup": "ffdhe8192" 00:16:59.932 } 00:16:59.932 } 00:16:59.932 ]' 00:16:59.932 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.932 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.932 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.932 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.932 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.932 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.932 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.932 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.191 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:17:00.191 12:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:17:00.758 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.758 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.758 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.758 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.758 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.758 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.758 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.758 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:01.017 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:01.017 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:17:01.017 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:01.017 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:01.017 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:01.017 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.017 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:01.017 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.017 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.017 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.017 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.017 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.017 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.584 00:17:01.584 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:01.585 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.585 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.585 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.585 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.585 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.585 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.585 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.585 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.585 { 00:17:01.585 "cntlid": 95, 00:17:01.585 "qid": 0, 00:17:01.585 "state": "enabled", 00:17:01.585 "thread": "nvmf_tgt_poll_group_000", 00:17:01.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:01.585 "listen_address": { 00:17:01.585 "trtype": "TCP", 00:17:01.585 "adrfam": "IPv4", 00:17:01.585 "traddr": "10.0.0.2", 00:17:01.585 "trsvcid": "4420" 00:17:01.585 }, 00:17:01.585 "peer_address": { 00:17:01.585 "trtype": "TCP", 00:17:01.585 "adrfam": "IPv4", 00:17:01.585 "traddr": "10.0.0.1", 00:17:01.585 "trsvcid": "54244" 00:17:01.585 }, 00:17:01.585 "auth": { 00:17:01.585 "state": "completed", 00:17:01.585 "digest": "sha384", 00:17:01.585 "dhgroup": "ffdhe8192" 00:17:01.585 } 00:17:01.585 } 00:17:01.585 ]' 00:17:01.585 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.843 12:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.843 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.843 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.843 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.843 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.843 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.843 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.103 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:17:02.103 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:17:02.671 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.672 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.672 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.672 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.672 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.672 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:02.672 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.672 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.672 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:02.672 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:02.931 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:02.931 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.931 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.931 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:02.931 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:02.931 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.931 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.931 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.931 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.931 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.931 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.931 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.932 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.191 00:17:03.191 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.191 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.191 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.191 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.191 12:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.191 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.191 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.191 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.191 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.191 { 00:17:03.191 "cntlid": 97, 00:17:03.191 "qid": 0, 00:17:03.191 "state": "enabled", 00:17:03.191 "thread": "nvmf_tgt_poll_group_000", 00:17:03.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:03.191 "listen_address": { 00:17:03.191 "trtype": "TCP", 00:17:03.191 "adrfam": "IPv4", 00:17:03.191 "traddr": "10.0.0.2", 00:17:03.191 "trsvcid": "4420" 00:17:03.191 }, 00:17:03.191 "peer_address": { 00:17:03.191 "trtype": "TCP", 00:17:03.191 "adrfam": "IPv4", 00:17:03.191 "traddr": "10.0.0.1", 00:17:03.191 "trsvcid": "54276" 00:17:03.191 }, 00:17:03.191 "auth": { 00:17:03.191 "state": "completed", 00:17:03.191 "digest": "sha512", 00:17:03.191 "dhgroup": "null" 00:17:03.191 } 00:17:03.191 } 00:17:03.191 ]' 00:17:03.191 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.450 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.450 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.450 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:03.450 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.450 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.450 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.450 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.709 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:17:03.709 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:17:04.277 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.277 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.277 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.277 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.277 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.277 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.277 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:04.277 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:04.536 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:04.536 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.536 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:04.536 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:04.536 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:04.536 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.536 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.536 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.536 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.536 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.536 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.536 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.536 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.795 00:17:04.795 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.795 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.795 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.795 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.795 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.795 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.795 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.795 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.795 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.795 { 00:17:04.795 "cntlid": 99, 
00:17:04.795 "qid": 0, 00:17:04.795 "state": "enabled", 00:17:04.795 "thread": "nvmf_tgt_poll_group_000", 00:17:04.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:04.795 "listen_address": { 00:17:04.795 "trtype": "TCP", 00:17:04.795 "adrfam": "IPv4", 00:17:04.795 "traddr": "10.0.0.2", 00:17:04.795 "trsvcid": "4420" 00:17:04.795 }, 00:17:04.795 "peer_address": { 00:17:04.795 "trtype": "TCP", 00:17:04.795 "adrfam": "IPv4", 00:17:04.795 "traddr": "10.0.0.1", 00:17:04.795 "trsvcid": "54292" 00:17:04.795 }, 00:17:04.795 "auth": { 00:17:04.795 "state": "completed", 00:17:04.795 "digest": "sha512", 00:17:04.795 "dhgroup": "null" 00:17:04.795 } 00:17:04.795 } 00:17:04.795 ]' 00:17:04.795 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.055 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.055 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.055 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:05.055 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.055 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.055 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.055 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.314 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret 
DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:17:05.314 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:17:05.884 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.884 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.884 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.884 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.884 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.884 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.884 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:05.884 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:06.143 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
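The `--dhchap-secret` / `--dhchap-ctrl-secret` values exchanged in this log all use the DHHC-1 secret encoding, `DHHC-1:<hh>:<base64>:`. The layout assumed in the sketch below is not stated anywhere in this log: the `<hh>` field is taken to identify the key transformation hash (`00` none, `01` SHA-256, `02` SHA-384, `03` SHA-512) and the base64 payload to carry the raw key followed by a 4-byte CRC-32, per the NVMe DH-HMAC-CHAP secret representation. A minimal Python sketch parsing one of the host secrets seen above:

```python
import base64
import binascii  # crc32, used to check the trailing checksum


def parse_dhchap_secret(secret: str):
    """Split a DHHC-1 secret into (hash_id, key_bytes, crc_ok).

    Assumed layout (from the NVMe DH-HMAC-CHAP secret format, not from
    this log): 'DHHC-1:<hh>:<base64>:' where the base64 payload is the
    raw key followed by a CRC-32 of the key.
    """
    prefix, hash_id, b64, _trailer = secret.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    blob = base64.b64decode(b64 + "=" * (-len(b64) % 4))
    key, crc = blob[:-4], blob[-4:]
    # Endianness of the stored CRC is an assumption here as well.
    crc_ok = binascii.crc32(key).to_bytes(4, "little") == crc
    return hash_id, key, crc_ok


# One of the host secrets from the log above (a SHA-256-class key)
secret = "DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS:"
hash_id, key, crc_ok = parse_dhchap_secret(secret)
print(hash_id, len(key))
```

Under these assumptions a `:01:` secret decodes to a 32-byte key plus the 4-byte checksum, which matches the payload sizes visible in the transcript.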
00:17:06.143 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.143 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:06.143 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:06.143 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:06.143 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.143 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.143 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.143 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.143 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.144 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.144 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.144 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.402 00:17:06.402 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.402 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.402 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.660 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.660 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.660 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.660 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.660 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.660 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.660 { 00:17:06.660 "cntlid": 101, 00:17:06.660 "qid": 0, 00:17:06.660 "state": "enabled", 00:17:06.660 "thread": "nvmf_tgt_poll_group_000", 00:17:06.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:06.660 "listen_address": { 00:17:06.660 "trtype": "TCP", 00:17:06.660 "adrfam": "IPv4", 00:17:06.660 "traddr": "10.0.0.2", 00:17:06.660 "trsvcid": "4420" 00:17:06.661 }, 00:17:06.661 "peer_address": { 00:17:06.661 "trtype": "TCP", 00:17:06.661 "adrfam": "IPv4", 00:17:06.661 "traddr": "10.0.0.1", 00:17:06.661 "trsvcid": "54314" 00:17:06.661 }, 00:17:06.661 "auth": { 00:17:06.661 "state": "completed", 00:17:06.661 "digest": "sha512", 00:17:06.661 "dhgroup": "null" 00:17:06.661 } 00:17:06.661 } 
00:17:06.661 ]' 00:17:06.661 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.661 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.661 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.661 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:06.661 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.661 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.661 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.661 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.920 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:17:06.920 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:17:07.487 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.487 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.487 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:07.487 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.487 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.487 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.487 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.487 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:07.487 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:07.746 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:07.746 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.746 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.746 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:07.746 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:07.746 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.746 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:07.746 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.746 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.746 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.746 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:07.746 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.746 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.005 00:17:08.005 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.005 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.005 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.264 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.264 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:08.264 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.264 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.264 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.264 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.264 { 00:17:08.264 "cntlid": 103, 00:17:08.264 "qid": 0, 00:17:08.264 "state": "enabled", 00:17:08.264 "thread": "nvmf_tgt_poll_group_000", 00:17:08.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:08.264 "listen_address": { 00:17:08.264 "trtype": "TCP", 00:17:08.264 "adrfam": "IPv4", 00:17:08.264 "traddr": "10.0.0.2", 00:17:08.264 "trsvcid": "4420" 00:17:08.264 }, 00:17:08.264 "peer_address": { 00:17:08.264 "trtype": "TCP", 00:17:08.264 "adrfam": "IPv4", 00:17:08.264 "traddr": "10.0.0.1", 00:17:08.264 "trsvcid": "54344" 00:17:08.264 }, 00:17:08.264 "auth": { 00:17:08.264 "state": "completed", 00:17:08.264 "digest": "sha512", 00:17:08.264 "dhgroup": "null" 00:17:08.264 } 00:17:08.264 } 00:17:08.264 ]' 00:17:08.264 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.264 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.264 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.264 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:08.264 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.264 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.264 12:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.264 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.574 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:17:08.574 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:17:09.202 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.203 12:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.203 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.462 00:17:09.462 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.462 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.462 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.720 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.720 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.720 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.721 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.721 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.721 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.721 { 00:17:09.721 "cntlid": 105, 00:17:09.721 "qid": 0, 00:17:09.721 "state": "enabled", 00:17:09.721 "thread": "nvmf_tgt_poll_group_000", 00:17:09.721 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:09.721 "listen_address": { 00:17:09.721 "trtype": "TCP", 00:17:09.721 "adrfam": "IPv4", 00:17:09.721 "traddr": "10.0.0.2", 00:17:09.721 "trsvcid": "4420" 00:17:09.721 }, 00:17:09.721 "peer_address": { 00:17:09.721 "trtype": "TCP", 00:17:09.721 "adrfam": "IPv4", 00:17:09.721 "traddr": "10.0.0.1", 00:17:09.721 "trsvcid": "54364" 00:17:09.721 }, 00:17:09.721 "auth": { 00:17:09.721 "state": "completed", 00:17:09.721 "digest": "sha512", 00:17:09.721 "dhgroup": "ffdhe2048" 00:17:09.721 } 00:17:09.721 } 00:17:09.721 ]' 00:17:09.721 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.721 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.721 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.721 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.721 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.979 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.979 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.979 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.979 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret 
DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:17:09.979 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:17:10.546 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.546 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:10.546 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.546 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.805 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.805 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.805 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:10.805 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:10.805 12:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:10.805 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.805 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:10.805 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:10.805 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:10.805 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.805 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.805 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.805 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.805 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.805 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.805 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.805 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.064 00:17:11.064 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.064 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.064 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.323 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.323 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.323 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.323 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.323 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.323 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.323 { 00:17:11.323 "cntlid": 107, 00:17:11.323 "qid": 0, 00:17:11.323 "state": "enabled", 00:17:11.323 "thread": "nvmf_tgt_poll_group_000", 00:17:11.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:11.323 "listen_address": { 00:17:11.323 "trtype": "TCP", 00:17:11.323 "adrfam": "IPv4", 00:17:11.323 "traddr": "10.0.0.2", 00:17:11.323 "trsvcid": "4420" 00:17:11.323 }, 00:17:11.323 "peer_address": { 00:17:11.323 "trtype": "TCP", 00:17:11.323 "adrfam": "IPv4", 00:17:11.323 "traddr": "10.0.0.1", 00:17:11.323 "trsvcid": "50274" 00:17:11.323 }, 00:17:11.323 "auth": { 00:17:11.323 "state": 
"completed", 00:17:11.323 "digest": "sha512", 00:17:11.323 "dhgroup": "ffdhe2048" 00:17:11.323 } 00:17:11.323 } 00:17:11.323 ]' 00:17:11.323 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.323 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.323 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.323 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:11.323 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.582 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.582 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.582 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.582 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:17:11.582 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:17:12.174 12:26:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.174 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.174 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.174 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.174 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.174 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.174 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:12.174 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:12.434 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:12.434 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.434 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:12.434 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:12.434 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:12.434 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.434 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.434 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.434 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.434 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.434 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.434 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.434 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.693 00:17:12.693 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.693 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.693 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.952 
12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.952 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.952 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.952 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.952 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.952 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.952 { 00:17:12.952 "cntlid": 109, 00:17:12.952 "qid": 0, 00:17:12.952 "state": "enabled", 00:17:12.952 "thread": "nvmf_tgt_poll_group_000", 00:17:12.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:12.952 "listen_address": { 00:17:12.952 "trtype": "TCP", 00:17:12.952 "adrfam": "IPv4", 00:17:12.952 "traddr": "10.0.0.2", 00:17:12.952 "trsvcid": "4420" 00:17:12.952 }, 00:17:12.952 "peer_address": { 00:17:12.952 "trtype": "TCP", 00:17:12.952 "adrfam": "IPv4", 00:17:12.952 "traddr": "10.0.0.1", 00:17:12.952 "trsvcid": "50298" 00:17:12.952 }, 00:17:12.952 "auth": { 00:17:12.952 "state": "completed", 00:17:12.952 "digest": "sha512", 00:17:12.952 "dhgroup": "ffdhe2048" 00:17:12.952 } 00:17:12.952 } 00:17:12.952 ]' 00:17:12.952 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.952 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.952 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.952 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:12.952 12:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.211 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.211 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.211 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.211 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:17:13.211 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:17:13.778 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.778 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:13.778 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.778 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.778 
12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.778 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.778 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:13.778 12:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:14.038 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:14.038 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.038 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:14.038 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:14.038 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:14.038 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.038 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:14.038 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.038 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.038 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.038 12:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:14.038 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.038 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.297 00:17:14.297 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.297 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.297 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.556 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.556 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.556 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.556 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.556 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.556 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.556 { 00:17:14.556 "cntlid": 111, 
00:17:14.556 "qid": 0, 00:17:14.556 "state": "enabled", 00:17:14.556 "thread": "nvmf_tgt_poll_group_000", 00:17:14.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:14.556 "listen_address": { 00:17:14.556 "trtype": "TCP", 00:17:14.556 "adrfam": "IPv4", 00:17:14.556 "traddr": "10.0.0.2", 00:17:14.556 "trsvcid": "4420" 00:17:14.556 }, 00:17:14.556 "peer_address": { 00:17:14.556 "trtype": "TCP", 00:17:14.556 "adrfam": "IPv4", 00:17:14.556 "traddr": "10.0.0.1", 00:17:14.556 "trsvcid": "50330" 00:17:14.556 }, 00:17:14.556 "auth": { 00:17:14.556 "state": "completed", 00:17:14.556 "digest": "sha512", 00:17:14.556 "dhgroup": "ffdhe2048" 00:17:14.556 } 00:17:14.556 } 00:17:14.556 ]' 00:17:14.556 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.556 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.556 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.556 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:14.556 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.815 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.815 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.815 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.815 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:17:14.815 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:17:15.383 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.383 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.383 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.383 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.383 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.383 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.383 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.383 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:15.383 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:15.642 12:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:15.642 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.642 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:15.642 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:15.642 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:15.642 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.642 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.642 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.642 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.642 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.643 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.643 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.643 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.902 00:17:15.902 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.902 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.902 12:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.161 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.161 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.161 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.161 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.161 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.161 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.161 { 00:17:16.161 "cntlid": 113, 00:17:16.161 "qid": 0, 00:17:16.161 "state": "enabled", 00:17:16.161 "thread": "nvmf_tgt_poll_group_000", 00:17:16.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:16.161 "listen_address": { 00:17:16.161 "trtype": "TCP", 00:17:16.161 "adrfam": "IPv4", 00:17:16.161 "traddr": "10.0.0.2", 00:17:16.161 "trsvcid": "4420" 00:17:16.161 }, 00:17:16.161 "peer_address": { 00:17:16.161 "trtype": "TCP", 00:17:16.161 "adrfam": "IPv4", 00:17:16.161 "traddr": "10.0.0.1", 00:17:16.161 "trsvcid": "50346" 00:17:16.161 }, 00:17:16.161 "auth": { 00:17:16.161 "state": 
"completed", 00:17:16.161 "digest": "sha512", 00:17:16.161 "dhgroup": "ffdhe3072" 00:17:16.161 } 00:17:16.161 } 00:17:16.161 ]' 00:17:16.161 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.161 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.161 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.161 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.161 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.420 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.420 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.420 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.420 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:17:16.420 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret 
DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:17:16.988 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.988 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.988 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.988 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.988 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.988 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.988 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:16.988 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:17.247 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:17.247 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.247 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:17.247 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:17.247 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:17.247 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.247 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.247 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.247 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.247 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.247 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.247 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.247 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.506 00:17:17.506 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.506 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.506 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.766 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.766 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.766 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.766 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.766 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.766 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.766 { 00:17:17.766 "cntlid": 115, 00:17:17.766 "qid": 0, 00:17:17.766 "state": "enabled", 00:17:17.766 "thread": "nvmf_tgt_poll_group_000", 00:17:17.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:17.766 "listen_address": { 00:17:17.766 "trtype": "TCP", 00:17:17.766 "adrfam": "IPv4", 00:17:17.766 "traddr": "10.0.0.2", 00:17:17.766 "trsvcid": "4420" 00:17:17.766 }, 00:17:17.766 "peer_address": { 00:17:17.766 "trtype": "TCP", 00:17:17.766 "adrfam": "IPv4", 00:17:17.766 "traddr": "10.0.0.1", 00:17:17.766 "trsvcid": "50390" 00:17:17.766 }, 00:17:17.766 "auth": { 00:17:17.766 "state": "completed", 00:17:17.766 "digest": "sha512", 00:17:17.766 "dhgroup": "ffdhe3072" 00:17:17.766 } 00:17:17.766 } 00:17:17.766 ]' 00:17:17.766 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.766 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.766 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.766 12:27:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:17.766 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.025 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.025 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.025 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.025 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:17:18.025 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:17:18.592 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.592 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:18.592 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:18.592 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.592 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.592 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.592 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:18.592 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:18.851 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:18.851 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.851 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:18.851 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:18.851 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:18.851 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.851 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.851 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.851 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:18.851 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.851 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.851 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.851 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.110 00:17:19.110 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.110 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.110 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.369 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.369 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.369 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.369 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.369 12:27:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.369 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.369 { 00:17:19.369 "cntlid": 117, 00:17:19.369 "qid": 0, 00:17:19.369 "state": "enabled", 00:17:19.369 "thread": "nvmf_tgt_poll_group_000", 00:17:19.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:19.369 "listen_address": { 00:17:19.369 "trtype": "TCP", 00:17:19.369 "adrfam": "IPv4", 00:17:19.369 "traddr": "10.0.0.2", 00:17:19.369 "trsvcid": "4420" 00:17:19.369 }, 00:17:19.369 "peer_address": { 00:17:19.369 "trtype": "TCP", 00:17:19.369 "adrfam": "IPv4", 00:17:19.369 "traddr": "10.0.0.1", 00:17:19.369 "trsvcid": "50430" 00:17:19.369 }, 00:17:19.369 "auth": { 00:17:19.369 "state": "completed", 00:17:19.369 "digest": "sha512", 00:17:19.369 "dhgroup": "ffdhe3072" 00:17:19.369 } 00:17:19.369 } 00:17:19.369 ]' 00:17:19.369 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.369 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.369 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.369 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:19.628 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.628 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.628 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.628 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.628 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:17:19.628 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:17:20.197 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.456 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.715 00:17:20.973 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.973 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.973 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.973 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.973 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.974 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.974 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.974 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.974 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.974 { 00:17:20.974 "cntlid": 119, 00:17:20.974 "qid": 0, 00:17:20.974 "state": "enabled", 00:17:20.974 "thread": "nvmf_tgt_poll_group_000", 00:17:20.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:20.974 "listen_address": { 00:17:20.974 "trtype": "TCP", 00:17:20.974 "adrfam": "IPv4", 00:17:20.974 "traddr": "10.0.0.2", 00:17:20.974 "trsvcid": "4420" 00:17:20.974 }, 00:17:20.974 "peer_address": { 00:17:20.974 "trtype": "TCP", 00:17:20.974 "adrfam": "IPv4", 00:17:20.974 "traddr": "10.0.0.1", 
00:17:20.974 "trsvcid": "41720" 00:17:20.974 }, 00:17:20.974 "auth": { 00:17:20.974 "state": "completed", 00:17:20.974 "digest": "sha512", 00:17:20.974 "dhgroup": "ffdhe3072" 00:17:20.974 } 00:17:20.974 } 00:17:20.974 ]' 00:17:20.974 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.974 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.974 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.233 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:21.233 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.233 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.233 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.233 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.491 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:17:21.491 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:17:22.058 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
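[Editor's note] The xtrace above repeatedly dumps the `nvmf_subsystem_get_qpairs` JSON and checks `.auth.digest`, `.auth.dhgroup`, and `.auth.state` against the expected values (auth.sh lines 75-77 use `jq`). The following is a hedged, standalone sketch of that same check, runnable without SPDK or `jq`: the sample JSON is a compacted stand-in mirroring the qpair dump in the log, and `get_field` is a hypothetical helper that assumes that compact `"key":"value"` layout.

```shell
# Sketch only: verify negotiated DH-HMAC-CHAP parameters from a qpairs dump.
# Sample mirrors the log's qpair output; compacted so sed can extract fields.
qpairs='{"auth":{"state":"completed","digest":"sha512","dhgroup":"ffdhe3072"}}'
expected_digest=sha512
expected_dhgroup=ffdhe3072

# Hypothetical helper: crude field extraction without jq (assumes compact JSON).
get_field() { echo "$qpairs" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"; }

# Same assertions the test script performs with [[ x == \x ]] pattern matches.
[ "$(get_field digest)" = "$expected_digest" ]   && echo "digest ok"
[ "$(get_field dhgroup)" = "$expected_dhgroup" ] && echo "dhgroup ok"
[ "$(get_field state)" = "completed" ]           && echo "auth completed"
```

In the real run the JSON comes from `rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0` and the extraction uses `jq -r '.[0].auth.digest'` etc.; the sed-based helper here is only to keep the sketch dependency-free.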
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.058 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:22.058 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.058 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.058 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.058 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.058 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.058 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:22.058 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:22.058 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:22.058 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.058 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.058 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:22.058 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:22.058 12:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.058 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.058 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.058 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.058 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.058 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.058 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.058 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.317 00:17:22.317 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.317 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.317 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.577 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.577 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.577 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.577 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.577 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.577 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.577 { 00:17:22.577 "cntlid": 121, 00:17:22.577 "qid": 0, 00:17:22.577 "state": "enabled", 00:17:22.577 "thread": "nvmf_tgt_poll_group_000", 00:17:22.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:22.577 "listen_address": { 00:17:22.577 "trtype": "TCP", 00:17:22.577 "adrfam": "IPv4", 00:17:22.577 "traddr": "10.0.0.2", 00:17:22.577 "trsvcid": "4420" 00:17:22.577 }, 00:17:22.577 "peer_address": { 00:17:22.577 "trtype": "TCP", 00:17:22.577 "adrfam": "IPv4", 00:17:22.577 "traddr": "10.0.0.1", 00:17:22.577 "trsvcid": "41762" 00:17:22.577 }, 00:17:22.577 "auth": { 00:17:22.577 "state": "completed", 00:17:22.577 "digest": "sha512", 00:17:22.577 "dhgroup": "ffdhe4096" 00:17:22.577 } 00:17:22.577 } 00:17:22.577 ]' 00:17:22.577 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.577 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.577 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.836 12:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.836 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.836 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.836 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.836 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.836 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:17:22.836 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:17:23.404 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.663 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:23.663 12:27:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.663 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.663 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.663 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.663 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:23.663 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:23.663 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:23.663 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.663 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:23.663 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:23.663 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:23.663 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.663 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.663 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.663 12:27:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.663 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.663 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.663 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.663 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.923 00:17:24.181 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.181 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.181 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.181 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.181 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.181 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.181 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:24.181 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.181 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.181 { 00:17:24.181 "cntlid": 123, 00:17:24.181 "qid": 0, 00:17:24.181 "state": "enabled", 00:17:24.181 "thread": "nvmf_tgt_poll_group_000", 00:17:24.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:24.181 "listen_address": { 00:17:24.181 "trtype": "TCP", 00:17:24.181 "adrfam": "IPv4", 00:17:24.181 "traddr": "10.0.0.2", 00:17:24.181 "trsvcid": "4420" 00:17:24.181 }, 00:17:24.181 "peer_address": { 00:17:24.181 "trtype": "TCP", 00:17:24.181 "adrfam": "IPv4", 00:17:24.181 "traddr": "10.0.0.1", 00:17:24.181 "trsvcid": "41804" 00:17:24.181 }, 00:17:24.181 "auth": { 00:17:24.181 "state": "completed", 00:17:24.181 "digest": "sha512", 00:17:24.181 "dhgroup": "ffdhe4096" 00:17:24.181 } 00:17:24.181 } 00:17:24.181 ]' 00:17:24.181 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.181 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.181 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.440 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:24.440 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.440 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.440 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.440 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.699 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:17:24.699 12:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:17:25.267 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.267 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.267 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.267 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.267 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.267 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.267 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:25.267 12:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:25.267 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:25.267 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.267 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:25.267 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:25.267 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:25.267 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.267 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.267 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.267 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.526 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.526 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.526 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.526 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.785 00:17:25.785 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.785 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.785 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.785 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.785 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.785 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.785 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.785 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.785 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.785 { 00:17:25.785 "cntlid": 125, 00:17:25.785 "qid": 0, 00:17:25.785 "state": "enabled", 00:17:25.785 "thread": "nvmf_tgt_poll_group_000", 00:17:25.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:25.785 "listen_address": { 00:17:25.785 "trtype": "TCP", 00:17:25.785 "adrfam": "IPv4", 00:17:25.785 "traddr": "10.0.0.2", 00:17:25.785 
"trsvcid": "4420" 00:17:25.785 }, 00:17:25.785 "peer_address": { 00:17:25.785 "trtype": "TCP", 00:17:25.785 "adrfam": "IPv4", 00:17:25.785 "traddr": "10.0.0.1", 00:17:25.785 "trsvcid": "41814" 00:17:25.785 }, 00:17:25.785 "auth": { 00:17:25.785 "state": "completed", 00:17:25.785 "digest": "sha512", 00:17:25.785 "dhgroup": "ffdhe4096" 00:17:25.785 } 00:17:25.785 } 00:17:25.785 ]' 00:17:25.785 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.044 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.044 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.044 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:26.044 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.044 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.044 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.044 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.311 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:17:26.311 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:17:26.879 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.879 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.879 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.879 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.879 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.879 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.879 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.879 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:27.138 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:27.138 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.138 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:27.138 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:27.138 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:27.138 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.138 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:27.138 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.138 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.138 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.138 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:27.138 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.138 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.397 00:17:27.397 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.397 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.397 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.656 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.656 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.656 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.656 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.657 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.657 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.657 { 00:17:27.657 "cntlid": 127, 00:17:27.657 "qid": 0, 00:17:27.657 "state": "enabled", 00:17:27.657 "thread": "nvmf_tgt_poll_group_000", 00:17:27.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:27.657 "listen_address": { 00:17:27.657 "trtype": "TCP", 00:17:27.657 "adrfam": "IPv4", 00:17:27.657 "traddr": "10.0.0.2", 00:17:27.657 "trsvcid": "4420" 00:17:27.657 }, 00:17:27.657 "peer_address": { 00:17:27.657 "trtype": "TCP", 00:17:27.657 "adrfam": "IPv4", 00:17:27.657 "traddr": "10.0.0.1", 00:17:27.657 "trsvcid": "41844" 00:17:27.657 }, 00:17:27.657 "auth": { 00:17:27.657 "state": "completed", 00:17:27.657 "digest": "sha512", 00:17:27.657 "dhgroup": "ffdhe4096" 00:17:27.657 } 00:17:27.657 } 00:17:27.657 ]' 00:17:27.657 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.657 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.657 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.657 12:27:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:27.657 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.657 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.657 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.657 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.916 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:17:27.916 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:17:28.484 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.484 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.484 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.484 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
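For reference, each iteration this log repeats (configure DH-HMAC-CHAP options on the host side, register the host NQN with a key, attach a controller, verify the qpair authenticated, then tear down) can be sketched as the shell sequence below. This is a reconstruction from the commands visible in the log, not part of the test run itself: the rpc.py path, socket, addresses, and NQNs are copied from this log and would differ on another setup, and the commands are echoed rather than executed since they require a live SPDK target.

```shell
#!/bin/sh
# Sketch of one DH-HMAC-CHAP test cycle as seen in this log.
# All values are taken from the log output above; adjust for your environment.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
SUBNQN="nqn.2024-03.io.spdk:cnode0"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562"

# Echo (not run) each RPC step: set allowed digests/dhgroups, register the
# host with its DH-CHAP key, attach a controller over TCP, inspect the
# qpair auth state, then detach and deregister before the next combination.
echo "$RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144"
echo "$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0"
echo "$RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0"
echo "$RPC nvmf_subsystem_get_qpairs $SUBNQN"   # log checks .auth.state == "completed"
echo "$RPC bdev_nvme_detach_controller nvme0"
echo "$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN"
```

The log also exercises the same cycle through `nvme connect` / `nvme disconnect` with `--dhchap-secret` / `--dhchap-ctrl-secret` on the kernel initiator side; the sketch above covers only the SPDK host-RPC path.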
00:17:28.485 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.485 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.485 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.485 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:28.485 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:28.744 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:28.744 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.744 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:28.744 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:28.744 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:28.744 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.744 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.744 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.744 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:28.744 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.744 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.744 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.744 12:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.002 00:17:29.002 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.002 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.002 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.261 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.261 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.261 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.261 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.261 12:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.261 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.261 { 00:17:29.261 "cntlid": 129, 00:17:29.261 "qid": 0, 00:17:29.261 "state": "enabled", 00:17:29.261 "thread": "nvmf_tgt_poll_group_000", 00:17:29.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:29.261 "listen_address": { 00:17:29.261 "trtype": "TCP", 00:17:29.261 "adrfam": "IPv4", 00:17:29.261 "traddr": "10.0.0.2", 00:17:29.261 "trsvcid": "4420" 00:17:29.261 }, 00:17:29.261 "peer_address": { 00:17:29.261 "trtype": "TCP", 00:17:29.261 "adrfam": "IPv4", 00:17:29.261 "traddr": "10.0.0.1", 00:17:29.261 "trsvcid": "41874" 00:17:29.261 }, 00:17:29.261 "auth": { 00:17:29.261 "state": "completed", 00:17:29.261 "digest": "sha512", 00:17:29.261 "dhgroup": "ffdhe6144" 00:17:29.261 } 00:17:29.261 } 00:17:29.261 ]' 00:17:29.261 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.261 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.262 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.262 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:29.262 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.521 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.521 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.521 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.521 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:17:29.521 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:17:30.088 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.088 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.088 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.088 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.088 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.088 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.088 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:30.088 12:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:30.347 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:30.347 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.347 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:30.347 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:30.347 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:30.347 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.347 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.347 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.347 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.347 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.347 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.347 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.347 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.916 00:17:30.916 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.916 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.916 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.916 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.916 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.916 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.916 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.916 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.916 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.916 { 00:17:30.916 "cntlid": 131, 00:17:30.916 "qid": 0, 00:17:30.916 "state": "enabled", 00:17:30.916 "thread": "nvmf_tgt_poll_group_000", 00:17:30.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:30.916 "listen_address": { 00:17:30.916 "trtype": "TCP", 00:17:30.916 "adrfam": "IPv4", 00:17:30.916 "traddr": "10.0.0.2", 00:17:30.916 
"trsvcid": "4420" 00:17:30.916 }, 00:17:30.916 "peer_address": { 00:17:30.916 "trtype": "TCP", 00:17:30.916 "adrfam": "IPv4", 00:17:30.916 "traddr": "10.0.0.1", 00:17:30.916 "trsvcid": "41902" 00:17:30.916 }, 00:17:30.916 "auth": { 00:17:30.916 "state": "completed", 00:17:30.916 "digest": "sha512", 00:17:30.916 "dhgroup": "ffdhe6144" 00:17:30.916 } 00:17:30.916 } 00:17:30.916 ]' 00:17:30.916 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.916 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.916 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.175 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.175 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.175 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.175 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.175 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.434 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:17:31.434 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:17:32.003 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.003 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.003 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.003 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.003 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.003 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.003 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.003 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.003 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:32.003 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.003 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:32.003 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:32.003 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:32.003 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.003 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.003 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.003 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.003 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.003 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.003 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.003 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.570 00:17:32.570 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.570 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:17:32.571 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.571 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.571 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.571 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.571 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.571 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.571 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.571 { 00:17:32.571 "cntlid": 133, 00:17:32.571 "qid": 0, 00:17:32.571 "state": "enabled", 00:17:32.571 "thread": "nvmf_tgt_poll_group_000", 00:17:32.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:32.571 "listen_address": { 00:17:32.571 "trtype": "TCP", 00:17:32.571 "adrfam": "IPv4", 00:17:32.571 "traddr": "10.0.0.2", 00:17:32.571 "trsvcid": "4420" 00:17:32.571 }, 00:17:32.571 "peer_address": { 00:17:32.571 "trtype": "TCP", 00:17:32.571 "adrfam": "IPv4", 00:17:32.571 "traddr": "10.0.0.1", 00:17:32.571 "trsvcid": "34618" 00:17:32.571 }, 00:17:32.571 "auth": { 00:17:32.571 "state": "completed", 00:17:32.571 "digest": "sha512", 00:17:32.571 "dhgroup": "ffdhe6144" 00:17:32.571 } 00:17:32.571 } 00:17:32.571 ]' 00:17:32.571 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.830 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.830 12:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.830 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:32.830 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.830 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.830 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.830 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.088 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:17:33.089 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:17:33.658 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.658 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.658 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.658 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.658 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.658 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.658 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:33.658 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:33.658 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:33.658 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.658 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.658 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:33.658 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:33.658 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.658 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:33.658 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.658 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.658 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.917 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:33.917 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.917 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.176 00:17:34.176 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.176 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.176 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.436 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.436 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.436 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.436 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:34.436 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.436 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.436 { 00:17:34.436 "cntlid": 135, 00:17:34.436 "qid": 0, 00:17:34.436 "state": "enabled", 00:17:34.436 "thread": "nvmf_tgt_poll_group_000", 00:17:34.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:34.436 "listen_address": { 00:17:34.436 "trtype": "TCP", 00:17:34.436 "adrfam": "IPv4", 00:17:34.436 "traddr": "10.0.0.2", 00:17:34.436 "trsvcid": "4420" 00:17:34.436 }, 00:17:34.436 "peer_address": { 00:17:34.436 "trtype": "TCP", 00:17:34.436 "adrfam": "IPv4", 00:17:34.436 "traddr": "10.0.0.1", 00:17:34.436 "trsvcid": "34640" 00:17:34.436 }, 00:17:34.436 "auth": { 00:17:34.436 "state": "completed", 00:17:34.436 "digest": "sha512", 00:17:34.436 "dhgroup": "ffdhe6144" 00:17:34.436 } 00:17:34.436 } 00:17:34.436 ]' 00:17:34.436 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.436 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.436 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.436 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.436 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.436 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.436 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.436 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.695 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:17:34.695 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:17:35.263 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.263 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.263 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.263 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.263 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.263 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.263 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.263 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:35.263 12:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:35.521 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:35.521 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.521 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:35.521 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:35.521 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:35.521 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.521 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.521 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.521 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.521 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.521 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.521 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.521 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.090 00:17:36.090 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.090 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.090 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.090 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.090 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.090 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.090 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.348 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.348 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.348 { 00:17:36.348 "cntlid": 137, 00:17:36.348 "qid": 0, 00:17:36.348 "state": "enabled", 00:17:36.348 "thread": "nvmf_tgt_poll_group_000", 00:17:36.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:36.348 "listen_address": { 00:17:36.348 "trtype": "TCP", 00:17:36.348 "adrfam": "IPv4", 00:17:36.348 "traddr": "10.0.0.2", 00:17:36.348 
"trsvcid": "4420" 00:17:36.348 }, 00:17:36.348 "peer_address": { 00:17:36.348 "trtype": "TCP", 00:17:36.348 "adrfam": "IPv4", 00:17:36.348 "traddr": "10.0.0.1", 00:17:36.348 "trsvcid": "34670" 00:17:36.348 }, 00:17:36.348 "auth": { 00:17:36.348 "state": "completed", 00:17:36.348 "digest": "sha512", 00:17:36.348 "dhgroup": "ffdhe8192" 00:17:36.348 } 00:17:36.348 } 00:17:36.348 ]' 00:17:36.348 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.348 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.348 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.348 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.348 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.348 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.348 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.348 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.606 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:17:36.606 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:17:37.174 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.174 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.174 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.174 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.174 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.174 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.174 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:37.174 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:37.433 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:37.433 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.433 12:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:37.433 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:37.433 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:37.433 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.433 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.433 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.433 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.433 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.433 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.433 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.433 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.693 00:17:37.952 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.952 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.952 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.952 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.952 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.952 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.952 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.952 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.952 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.952 { 00:17:37.952 "cntlid": 139, 00:17:37.952 "qid": 0, 00:17:37.952 "state": "enabled", 00:17:37.952 "thread": "nvmf_tgt_poll_group_000", 00:17:37.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:37.952 "listen_address": { 00:17:37.952 "trtype": "TCP", 00:17:37.952 "adrfam": "IPv4", 00:17:37.952 "traddr": "10.0.0.2", 00:17:37.952 "trsvcid": "4420" 00:17:37.952 }, 00:17:37.952 "peer_address": { 00:17:37.952 "trtype": "TCP", 00:17:37.952 "adrfam": "IPv4", 00:17:37.952 "traddr": "10.0.0.1", 00:17:37.952 "trsvcid": "34692" 00:17:37.952 }, 00:17:37.952 "auth": { 00:17:37.952 "state": "completed", 00:17:37.952 "digest": "sha512", 00:17:37.952 "dhgroup": "ffdhe8192" 00:17:37.952 } 00:17:37.952 } 00:17:37.952 ]' 00:17:37.952 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.211 12:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.211 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.211 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.211 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.211 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.211 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.211 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.470 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:17:38.470 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: --dhchap-ctrl-secret DHHC-1:02:ODg2YWE5ZTEyNGZkYTQxN2VhMzM2NDI1ZDBmYzYzMTA2MTY0ZWUwYzI4NjY4NjBlLOOVBw==: 00:17:39.038 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.038 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.038 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.038 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.038 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.038 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.038 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.038 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.038 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:39.038 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.038 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:39.038 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:39.038 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:39.038 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.038 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:39.038 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.038 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.038 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.038 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.039 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.039 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.607 00:17:39.607 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.607 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.607 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.867 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.867 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.867 12:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.867 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.867 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.867 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.867 { 00:17:39.867 "cntlid": 141, 00:17:39.867 "qid": 0, 00:17:39.867 "state": "enabled", 00:17:39.867 "thread": "nvmf_tgt_poll_group_000", 00:17:39.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:39.867 "listen_address": { 00:17:39.867 "trtype": "TCP", 00:17:39.867 "adrfam": "IPv4", 00:17:39.867 "traddr": "10.0.0.2", 00:17:39.867 "trsvcid": "4420" 00:17:39.867 }, 00:17:39.867 "peer_address": { 00:17:39.867 "trtype": "TCP", 00:17:39.867 "adrfam": "IPv4", 00:17:39.867 "traddr": "10.0.0.1", 00:17:39.867 "trsvcid": "34736" 00:17:39.867 }, 00:17:39.867 "auth": { 00:17:39.867 "state": "completed", 00:17:39.867 "digest": "sha512", 00:17:39.867 "dhgroup": "ffdhe8192" 00:17:39.867 } 00:17:39.867 } 00:17:39.867 ]' 00:17:39.867 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.867 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.867 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.867 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:39.867 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.126 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.126 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.126 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.126 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:17:40.126 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:01:YTkyMjMzM2U4YTQ2YTBhYWY3ZTExNmZiYjhmNjgzZDjAkhFi: 00:17:40.695 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.695 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.695 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.695 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.695 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.695 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.695 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:40.695 12:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:40.954 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:40.954 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.954 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.954 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:40.954 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:40.954 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.954 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:40.954 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.954 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.954 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.954 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:40.954 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.954 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.523 00:17:41.523 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.523 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.523 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.782 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.782 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.782 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.782 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.782 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.782 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.782 { 00:17:41.782 "cntlid": 143, 00:17:41.782 "qid": 0, 00:17:41.782 "state": "enabled", 00:17:41.782 "thread": "nvmf_tgt_poll_group_000", 00:17:41.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:41.782 "listen_address": { 00:17:41.782 "trtype": "TCP", 00:17:41.782 "adrfam": 
"IPv4", 00:17:41.782 "traddr": "10.0.0.2", 00:17:41.782 "trsvcid": "4420" 00:17:41.782 }, 00:17:41.782 "peer_address": { 00:17:41.782 "trtype": "TCP", 00:17:41.782 "adrfam": "IPv4", 00:17:41.782 "traddr": "10.0.0.1", 00:17:41.782 "trsvcid": "40630" 00:17:41.782 }, 00:17:41.782 "auth": { 00:17:41.782 "state": "completed", 00:17:41.782 "digest": "sha512", 00:17:41.782 "dhgroup": "ffdhe8192" 00:17:41.782 } 00:17:41.782 } 00:17:41.782 ]' 00:17:41.782 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.782 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.782 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.782 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.782 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.782 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.782 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.782 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.041 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:17:42.041 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:17:42.626 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.626 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.626 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.626 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.626 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.626 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:42.626 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:42.626 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:42.626 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:42.626 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:42.626 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:42.884 12:27:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:42.884 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.884 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.884 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:42.884 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:42.884 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.884 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.884 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.884 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.884 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.884 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.884 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.884 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.450 00:17:43.450 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.450 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.450 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.450 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.450 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.450 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.450 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.450 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.450 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.450 { 00:17:43.450 "cntlid": 145, 00:17:43.450 "qid": 0, 00:17:43.450 "state": "enabled", 00:17:43.450 "thread": "nvmf_tgt_poll_group_000", 00:17:43.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:43.450 "listen_address": { 00:17:43.450 "trtype": "TCP", 00:17:43.450 "adrfam": "IPv4", 00:17:43.450 "traddr": "10.0.0.2", 00:17:43.450 "trsvcid": "4420" 00:17:43.450 }, 00:17:43.450 "peer_address": { 00:17:43.450 "trtype": "TCP", 00:17:43.450 "adrfam": "IPv4", 00:17:43.450 "traddr": "10.0.0.1", 00:17:43.450 "trsvcid": "40650" 00:17:43.450 }, 00:17:43.450 "auth": { 00:17:43.450 "state": 
"completed", 00:17:43.450 "digest": "sha512", 00:17:43.450 "dhgroup": "ffdhe8192" 00:17:43.450 } 00:17:43.450 } 00:17:43.450 ]' 00:17:43.450 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.709 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.709 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.709 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.709 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.709 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.709 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.709 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.967 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=: 00:17:43.967 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2NhZDc2MzQyYmZiZjUyMDY5NTQxZjAyMzQyYzQyMzJiZmM2Mjc3ZjYxNzdiZDI1EI+FEQ==: --dhchap-ctrl-secret 
DHHC-1:03:NDg2YjY1YjkwNTg0NzUyOGQ2NmE2Zjg4MDViZjg3YWNiMzhjNjU4NjBkZTg5MWMwNGNmZGM2ODgxZjZhODEwYr4qW6w=:
00:17:44.534 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:44.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:44.534 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:44.534 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:44.534 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:44.534 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:44.534 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1
00:17:44.534 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:44.534 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:44.534 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:44.534 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2
00:17:44.534 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:44.534 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2
00:17:44.534 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local
arg=bdev_connect
00:17:44.534 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:44.534 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:44.534 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:44.534 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2
00:17:44.534 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:17:44.534 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:17:45.102 request:
00:17:45.102 {
00:17:45.102 "name": "nvme0",
00:17:45.102 "trtype": "tcp",
00:17:45.102 "traddr": "10.0.0.2",
00:17:45.102 "adrfam": "ipv4",
00:17:45.102 "trsvcid": "4420",
00:17:45.102 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:45.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:17:45.102 "prchk_reftag": false,
00:17:45.102 "prchk_guard": false,
00:17:45.102 "hdgst": false,
00:17:45.102 "ddgst": false,
00:17:45.102 "dhchap_key": "key2",
00:17:45.102 "allow_unrecognized_csi": false,
00:17:45.102 "method": "bdev_nvme_attach_controller",
00:17:45.102 "req_id": 1
00:17:45.102 }
00:17:45.102 Got JSON-RPC error response
00:17:45.102 response:
00:17:45.102 {
00:17:45.102 "code": -5,
00:17:45.102 "message":
"Input/output error"
00:17:45.102 }
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:45.102 12:27:27
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:17:45.102 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:17:45.361 request:
00:17:45.361 {
00:17:45.361 "name": "nvme0",
00:17:45.361 "trtype": "tcp",
00:17:45.361 "traddr": "10.0.0.2",
00:17:45.361 "adrfam": "ipv4",
00:17:45.361 "trsvcid": "4420",
00:17:45.361 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:45.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:17:45.361 "prchk_reftag": false,
00:17:45.361 "prchk_guard": false,
00:17:45.361 "hdgst":
false,
00:17:45.361 "ddgst": false,
00:17:45.361 "dhchap_key": "key1",
00:17:45.361 "dhchap_ctrlr_key": "ckey2",
00:17:45.361 "allow_unrecognized_csi": false,
00:17:45.361 "method": "bdev_nvme_attach_controller",
00:17:45.361 "req_id": 1
00:17:45.361 }
00:17:45.361 Got JSON-RPC error response
00:17:45.361 response:
00:17:45.361 {
00:17:45.361 "code": -5,
00:17:45.361 "message": "Input/output error"
00:17:45.361 }
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:45.361 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:46.001 request:
00:17:46.001 {
00:17:46.001 "name": "nvme0",
00:17:46.001 "trtype":
"tcp",
00:17:46.001 "traddr": "10.0.0.2",
00:17:46.001 "adrfam": "ipv4",
00:17:46.001 "trsvcid": "4420",
00:17:46.001 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:46.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:17:46.001 "prchk_reftag": false,
00:17:46.001 "prchk_guard": false,
00:17:46.001 "hdgst": false,
00:17:46.001 "ddgst": false,
00:17:46.001 "dhchap_key": "key1",
00:17:46.001 "dhchap_ctrlr_key": "ckey1",
00:17:46.001 "allow_unrecognized_csi": false,
00:17:46.001 "method": "bdev_nvme_attach_controller",
00:17:46.001 "req_id": 1
00:17:46.001 }
00:17:46.001 Got JSON-RPC error response
00:17:46.001 response:
00:17:46.001 {
00:17:46.001 "code": -5,
00:17:46.001 "message": "Input/output error"
00:17:46.001 }
00:17:46.001 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:46.001 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:46.001 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:46.001 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:46.001 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:46.001 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:46.001 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.001 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:46.001 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 421360
00:17:46.001 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@954 -- # '[' -z 421360 ']'
00:17:46.001 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 421360
00:17:46.001 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:17:46.001 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:46.001 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 421360
00:17:46.001 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:46.001 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:46.001 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 421360'
00:17:46.001 killing process with pid 421360
00:17:46.001 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 421360
00:17:46.001 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 421360
00:17:46.269 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth
00:17:46.269 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:46.269 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:46.269 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.269 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=443753
00:17:46.269 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 443753
00:17:46.269 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth
00:17:46.269 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 443753 ']'
00:17:46.269 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:46.269 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:46.269 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:46.269 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:46.269 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.269 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:46.269 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:17:46.269 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:17:46.269 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:46.269 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.537 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:46.537 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:17:46.537 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 443753
00:17:46.537
12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 443753 ']'
00:17:46.537 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:46.537 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:46.537 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:46.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:46.537 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:46.537 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.537 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:46.537 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:17:46.537 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd
00:17:46.537 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:46.537 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.796 null0
00:17:46.796 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:46.796 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:17:46.796 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.OTn
00:17:46.796 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:46.796
12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.796 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:46.796 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.92O ]]
00:17:46.796 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.92O
00:17:46.796 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vp6
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.k1p ]]
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k1p
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.797 12:27:29
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Mbj
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.ivd ]]
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ivd
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.KqB
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:46.797 12:27:29
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]]
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:46.797 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:47.734 nvme0n1
00:17:47.734 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:47.734 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:47.734 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:47.734 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:47.734 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:47.734 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:47.734 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:47.734 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:47.734 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:47.734 {
00:17:47.734 "cntlid": 1,
00:17:47.734 "qid": 0,
00:17:47.734 "state": "enabled",
00:17:47.734 "thread": "nvmf_tgt_poll_group_000",
00:17:47.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:17:47.734 "listen_address": {
00:17:47.734 "trtype": "TCP",
00:17:47.734 "adrfam": "IPv4",
00:17:47.734 "traddr": "10.0.0.2",
00:17:47.734 "trsvcid": "4420"
00:17:47.734 },
00:17:47.734 "peer_address": {
00:17:47.734 "trtype": "TCP",
00:17:47.734 "adrfam": "IPv4",
00:17:47.734 "traddr": "10.0.0.1",
00:17:47.734 "trsvcid": "40702"
00:17:47.734 },
00:17:47.734 "auth": {
00:17:47.734 "state": "completed",
00:17:47.734 "digest": "sha512",
00:17:47.734 "dhgroup": "ffdhe8192"
00:17:47.734 }
00:17:47.734 }
00:17:47.734 ]'
00:17:47.734 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:47.734 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:47.734 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:47.993 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:47.993 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:47.993 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:47.993 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:47.993 12:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:47.993 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=:
00:17:47.993 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=:
00:17:48.561 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:48.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:48.820 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:48.820 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:48.820 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.820 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:48.820 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:17:48.820 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:48.820 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.820 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:48.820 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:17:48.820 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:17:48.820 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:17:48.820 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:48.820 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0
--dhchap-key key3
00:17:48.820 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:48.820 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:48.820 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:49.078 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:49.078 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:49.078 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:49.078 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:49.078 request:
00:17:49.078 {
00:17:49.078 "name": "nvme0",
00:17:49.078 "trtype": "tcp",
00:17:49.078 "traddr": "10.0.0.2",
00:17:49.078 "adrfam": "ipv4",
00:17:49.078 "trsvcid": "4420",
00:17:49.078 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:49.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:17:49.078 "prchk_reftag": false,
00:17:49.078 "prchk_guard": false,
00:17:49.078 "hdgst": false,
00:17:49.078 "ddgst": false,
00:17:49.078 "dhchap_key": "key3",
00:17:49.078 "allow_unrecognized_csi": false,
00:17:49.078 "method": "bdev_nvme_attach_controller",
00:17:49.078 "req_id": 1
00:17:49.078 }
00:17:49.078 Got JSON-RPC error response 00:17:49.078 response: 00:17:49.078 { 00:17:49.078 "code": -5, 00:17:49.078 "message": "Input/output error" 00:17:49.078 } 00:17:49.078 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:49.078 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:49.078 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:49.078 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:49.078 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:49.078 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:49.078 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:49.078 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:49.335 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:49.335 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:49.335 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:49.335 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:49.335 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.335 12:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:49.335 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.335 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:49.335 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.335 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.594 request: 00:17:49.594 { 00:17:49.594 "name": "nvme0", 00:17:49.594 "trtype": "tcp", 00:17:49.594 "traddr": "10.0.0.2", 00:17:49.594 "adrfam": "ipv4", 00:17:49.594 "trsvcid": "4420", 00:17:49.594 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:49.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:49.594 "prchk_reftag": false, 00:17:49.594 "prchk_guard": false, 00:17:49.594 "hdgst": false, 00:17:49.594 "ddgst": false, 00:17:49.594 "dhchap_key": "key3", 00:17:49.594 "allow_unrecognized_csi": false, 00:17:49.594 "method": "bdev_nvme_attach_controller", 00:17:49.594 "req_id": 1 00:17:49.594 } 00:17:49.594 Got JSON-RPC error response 00:17:49.594 response: 00:17:49.594 { 00:17:49.594 "code": -5, 00:17:49.594 "message": "Input/output error" 00:17:49.594 } 00:17:49.594 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:49.594 12:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:49.594 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:49.594 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:49.594 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:49.594 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:49.594 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:49.594 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:49.594 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:49.594 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:49.853 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.853 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.853 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.853 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.853 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.853 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.853 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.853 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.853 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:49.853 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:49.853 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:49.853 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:49.853 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.853 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:49.853 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.853 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:49.853 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:49.854 12:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:50.110 request: 00:17:50.110 { 00:17:50.110 "name": "nvme0", 00:17:50.110 "trtype": "tcp", 00:17:50.110 "traddr": "10.0.0.2", 00:17:50.110 "adrfam": "ipv4", 00:17:50.110 "trsvcid": "4420", 00:17:50.110 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:50.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:50.110 "prchk_reftag": false, 00:17:50.110 "prchk_guard": false, 00:17:50.111 "hdgst": false, 00:17:50.111 "ddgst": false, 00:17:50.111 "dhchap_key": "key0", 00:17:50.111 "dhchap_ctrlr_key": "key1", 00:17:50.111 "allow_unrecognized_csi": false, 00:17:50.111 "method": "bdev_nvme_attach_controller", 00:17:50.111 "req_id": 1 00:17:50.111 } 00:17:50.111 Got JSON-RPC error response 00:17:50.111 response: 00:17:50.111 { 00:17:50.111 "code": -5, 00:17:50.111 "message": "Input/output error" 00:17:50.111 } 00:17:50.111 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:50.111 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:50.111 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:50.111 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:50.111 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:50.111 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:50.111 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:50.367 nvme0n1 00:17:50.367 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:50.367 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:50.367 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.629 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.629 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.629 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.887 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:50.887 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.887 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.887 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:50.887 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:50.887 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:50.887 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:51.453 nvme0n1 00:17:51.711 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:51.711 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:51.711 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.711 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.711 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:51.711 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.711 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.711 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:51.711 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:51.711 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:51.711 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.970 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.970 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:17:51.970 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: --dhchap-ctrl-secret DHHC-1:03:YzExM2U5NjQzZWMwM2ExMTk3MTk5MDlkMTgwZjUxZDM5ZjM2NTAyMWQzMDMxM2M0YTk1N2M5MmNmNjA1ZTEzZQLW1bE=: 00:17:52.537 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:52.537 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:52.537 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:52.537 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:52.537 12:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:52.537 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:52.537 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:52.537 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.537 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.794 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:52.794 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:52.794 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:52.794 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:52.794 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.794 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:52.794 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.794 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:52.794 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 
00:17:52.794 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:53.360 request: 00:17:53.360 { 00:17:53.360 "name": "nvme0", 00:17:53.360 "trtype": "tcp", 00:17:53.360 "traddr": "10.0.0.2", 00:17:53.360 "adrfam": "ipv4", 00:17:53.360 "trsvcid": "4420", 00:17:53.360 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:53.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:53.360 "prchk_reftag": false, 00:17:53.360 "prchk_guard": false, 00:17:53.360 "hdgst": false, 00:17:53.360 "ddgst": false, 00:17:53.360 "dhchap_key": "key1", 00:17:53.360 "allow_unrecognized_csi": false, 00:17:53.360 "method": "bdev_nvme_attach_controller", 00:17:53.360 "req_id": 1 00:17:53.360 } 00:17:53.360 Got JSON-RPC error response 00:17:53.360 response: 00:17:53.360 { 00:17:53.360 "code": -5, 00:17:53.360 "message": "Input/output error" 00:17:53.360 } 00:17:53.360 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:53.360 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:53.360 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:53.360 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:53.360 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:53.360 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:53.360 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:53.925 nvme0n1 00:17:53.925 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:53.925 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:53.925 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.184 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.184 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.184 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.443 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:54.443 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.443 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.443 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.443 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:54.443 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:54.443 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:54.701 nvme0n1 00:17:54.701 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:54.701 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:54.701 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.959 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.959 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.959 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.218 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:55.218 
12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.218 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.218 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.218 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: '' 2s 00:17:55.218 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:55.218 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:55.218 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: 00:17:55.218 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:55.218 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:55.218 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:55.218 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: ]] 00:17:55.218 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YTY0Yzk3YjQzZmZlOWRmZjcxMjNhNGI5ZWFmNWZhNmG87nUS: 00:17:55.218 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:55.218 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:55.218 12:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:57.123 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:57.123 12:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:57.123 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:57.123 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:57.123 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:57.124 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:57.124 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:57.124 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:57.124 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.124 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.124 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.124 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: 2s 00:17:57.124 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:57.124 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:57.124 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:57.124 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # 
ckey=DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: 00:17:57.124 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:57.124 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:57.124 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:57.124 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: ]] 00:17:57.124 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NjE3YzM3NGU3MDQxY2RjNTQ1Yzg3MmE2ZTBkMmNlYWZkNGJkMGY5NTg0NTRjYjBiXHbLsQ==: 00:17:57.124 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:57.124 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:59.661 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:59.661 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:59.661 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:59.661 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:59.661 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:59.661 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:59.661 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:59.661 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.661 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.661 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:59.661 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.661 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.661 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.661 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:59.661 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:59.661 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:59.920 nvme0n1 00:17:59.920 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:59.920 12:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.920 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.920 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.920 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:59.920 12:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:00.488 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:00.488 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:00.488 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.746 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.746 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.746 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.746 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.746 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.746 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 
00:18:00.746 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:01.005 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:01.005 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:01.005 12:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:01.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:01.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:01.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:01.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:01.005 12:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:01.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:01.005 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:01.572 request: 00:18:01.572 { 00:18:01.572 "name": "nvme0", 00:18:01.572 "dhchap_key": "key1", 00:18:01.572 "dhchap_ctrlr_key": "key3", 00:18:01.572 "method": "bdev_nvme_set_keys", 00:18:01.572 "req_id": 1 00:18:01.572 } 00:18:01.572 Got JSON-RPC error response 00:18:01.572 response: 00:18:01.572 { 00:18:01.572 "code": -13, 00:18:01.572 "message": "Permission denied" 00:18:01.572 } 00:18:01.572 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:01.572 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:01.572 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:01.572 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:01.572 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:01.572 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:01.572 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.831 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:01.831 12:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:02.767 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:02.767 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:02.767 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.026 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:03.026 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:03.026 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.026 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.026 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.026 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:03.026 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 
--ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:03.026 12:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:03.594 nvme0n1 00:18:03.594 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:03.594 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.594 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.594 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.594 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:03.594 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:03.594 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:03.594 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:03.594 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.594 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:03.594 
12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.594 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:03.853 12:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:04.111 request: 00:18:04.112 { 00:18:04.112 "name": "nvme0", 00:18:04.112 "dhchap_key": "key2", 00:18:04.112 "dhchap_ctrlr_key": "key0", 00:18:04.112 "method": "bdev_nvme_set_keys", 00:18:04.112 "req_id": 1 00:18:04.112 } 00:18:04.112 Got JSON-RPC error response 00:18:04.112 response: 00:18:04.112 { 00:18:04.112 "code": -13, 00:18:04.112 "message": "Permission denied" 00:18:04.112 } 00:18:04.112 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:04.112 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:04.112 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:04.112 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:04.112 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:04.112 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:04.112 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.370 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:04.370 12:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:05.306 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:05.306 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:05.306 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.564 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:05.564 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:05.564 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:05.564 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 421382 00:18:05.564 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 421382 ']' 00:18:05.564 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 421382 00:18:05.564 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:05.564 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.564 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 421382 00:18:05.564 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:05.564 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:05.564 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 421382' 00:18:05.564 killing process with 
pid 421382 00:18:05.564 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 421382 00:18:05.564 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 421382 00:18:06.132 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:06.132 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:06.132 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:06.132 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:06.132 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:06.132 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:06.132 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:06.132 rmmod nvme_tcp 00:18:06.132 rmmod nvme_fabrics 00:18:06.132 rmmod nvme_keyring 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 443753 ']' 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 443753 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 443753 ']' 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 443753 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:06.132 
12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 443753 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 443753' 00:18:06.132 killing process with pid 443753 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 443753 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 443753 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:06.132 12:27:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.132 12:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.OTn /tmp/spdk.key-sha256.vp6 /tmp/spdk.key-sha384.Mbj /tmp/spdk.key-sha512.KqB /tmp/spdk.key-sha512.92O /tmp/spdk.key-sha384.k1p /tmp/spdk.key-sha256.ivd '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:08.668 00:18:08.668 real 2m34.135s 00:18:08.668 user 5m55.507s 00:18:08.668 sys 0m24.584s 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.668 ************************************ 00:18:08.668 END TEST nvmf_auth_target 00:18:08.668 ************************************ 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set 
+x 00:18:08.668 ************************************ 00:18:08.668 START TEST nvmf_bdevio_no_huge 00:18:08.668 ************************************ 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:08.668 * Looking for test storage... 00:18:08.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@340 -- # ver1_l=2 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:08.668 12:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:08.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.668 --rc genhtml_branch_coverage=1 00:18:08.668 --rc genhtml_function_coverage=1 00:18:08.668 --rc genhtml_legend=1 00:18:08.668 --rc geninfo_all_blocks=1 00:18:08.668 --rc geninfo_unexecuted_blocks=1 00:18:08.668 00:18:08.668 ' 00:18:08.668 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:08.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.668 --rc genhtml_branch_coverage=1 00:18:08.668 --rc genhtml_function_coverage=1 00:18:08.668 --rc genhtml_legend=1 00:18:08.668 --rc geninfo_all_blocks=1 00:18:08.668 --rc geninfo_unexecuted_blocks=1 00:18:08.668 00:18:08.668 ' 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:08.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.669 --rc genhtml_branch_coverage=1 00:18:08.669 --rc genhtml_function_coverage=1 00:18:08.669 --rc genhtml_legend=1 00:18:08.669 --rc geninfo_all_blocks=1 00:18:08.669 --rc geninfo_unexecuted_blocks=1 00:18:08.669 00:18:08.669 ' 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:08.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.669 --rc genhtml_branch_coverage=1 00:18:08.669 --rc genhtml_function_coverage=1 00:18:08.669 --rc 
genhtml_legend=1 00:18:08.669 --rc geninfo_all_blocks=1 00:18:08.669 --rc geninfo_unexecuted_blocks=1 00:18:08.669 00:18:08.669 ' 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:08.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:08.669 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:18:15.239 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:15.239 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.239 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:15.240 Found net devices under 0000:86:00.0: cvl_0_0 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.240 
12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:15.240 Found net devices under 0000:86:00.1: cvl_0_1 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:18:15.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:18:15.240 00:18:15.240 --- 10.0.0.2 ping statistics --- 00:18:15.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.240 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:15.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:15.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:18:15.240 00:18:15.240 --- 10.0.0.1 ping statistics --- 00:18:15.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.240 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=450716 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 450716 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 450716 ']' 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.240 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:15.240 [2024-11-20 12:27:57.578632] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:18:15.240 [2024-11-20 12:27:57.578686] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:15.240 [2024-11-20 12:27:57.668114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:15.240 [2024-11-20 12:27:57.716088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.240 [2024-11-20 12:27:57.716122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.240 [2024-11-20 12:27:57.716129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.240 [2024-11-20 12:27:57.716135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.240 [2024-11-20 12:27:57.716140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:15.240 [2024-11-20 12:27:57.717279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:15.240 [2024-11-20 12:27:57.717395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:15.240 [2024-11-20 12:27:57.717503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:15.240 [2024-11-20 12:27:57.717504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:15.499 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.499 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:15.499 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:15.499 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:15.499 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:15.499 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.499 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:15.499 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:15.500 [2024-11-20 12:27:58.475671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:15.500 12:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:15.500 Malloc0 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:15.500 [2024-11-20 12:27:58.515928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.500 12:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:15.500 { 00:18:15.500 "params": { 00:18:15.500 "name": "Nvme$subsystem", 00:18:15.500 "trtype": "$TEST_TRANSPORT", 00:18:15.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.500 "adrfam": "ipv4", 00:18:15.500 "trsvcid": "$NVMF_PORT", 00:18:15.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.500 "hdgst": ${hdgst:-false}, 00:18:15.500 "ddgst": ${ddgst:-false} 00:18:15.500 }, 00:18:15.500 "method": "bdev_nvme_attach_controller" 00:18:15.500 } 00:18:15.500 EOF 00:18:15.500 )") 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:15.500 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:15.500 "params": { 00:18:15.500 "name": "Nvme1", 00:18:15.500 "trtype": "tcp", 00:18:15.500 "traddr": "10.0.0.2", 00:18:15.500 "adrfam": "ipv4", 00:18:15.500 "trsvcid": "4420", 00:18:15.500 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.500 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:15.500 "hdgst": false, 00:18:15.500 "ddgst": false 00:18:15.500 }, 00:18:15.500 "method": "bdev_nvme_attach_controller" 00:18:15.500 }' 00:18:15.500 [2024-11-20 12:27:58.566871] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:18:15.500 [2024-11-20 12:27:58.566919] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid450758 ] 00:18:15.760 [2024-11-20 12:27:58.649009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:15.760 [2024-11-20 12:27:58.698454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.760 [2024-11-20 12:27:58.698560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.760 [2024-11-20 12:27:58.698561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.018 I/O targets: 00:18:16.018 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:16.018 00:18:16.018 00:18:16.018 CUnit - A unit testing framework for C - Version 2.1-3 00:18:16.018 http://cunit.sourceforge.net/ 00:18:16.018 00:18:16.018 00:18:16.018 Suite: bdevio tests on: Nvme1n1 00:18:16.018 Test: blockdev write read block ...passed 00:18:16.018 Test: blockdev write zeroes read block ...passed 00:18:16.018 Test: blockdev write zeroes read no split ...passed 00:18:16.275 Test: blockdev write zeroes 
read split ...passed 00:18:16.275 Test: blockdev write zeroes read split partial ...passed 00:18:16.275 Test: blockdev reset ...[2024-11-20 12:27:59.156543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:16.275 [2024-11-20 12:27:59.156606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ce920 (9): Bad file descriptor 00:18:16.275 [2024-11-20 12:27:59.168243] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:18:16.275 passed 00:18:16.275 Test: blockdev write read 8 blocks ...passed 00:18:16.275 Test: blockdev write read size > 128k ...passed 00:18:16.275 Test: blockdev write read invalid size ...passed 00:18:16.275 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:16.275 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:16.275 Test: blockdev write read max offset ...passed 00:18:16.275 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:16.275 Test: blockdev writev readv 8 blocks ...passed 00:18:16.275 Test: blockdev writev readv 30 x 1block ...passed 00:18:16.533 Test: blockdev writev readv block ...passed 00:18:16.533 Test: blockdev writev readv size > 128k ...passed 00:18:16.533 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:16.533 Test: blockdev comparev and writev ...[2024-11-20 12:27:59.424715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.533 [2024-11-20 12:27:59.424743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:16.533 [2024-11-20 12:27:59.424757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.533 [2024-11-20 
12:27:59.424765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:16.533 [2024-11-20 12:27:59.424998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.533 [2024-11-20 12:27:59.425016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:16.533 [2024-11-20 12:27:59.425028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.533 [2024-11-20 12:27:59.425035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:16.533 [2024-11-20 12:27:59.425271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.533 [2024-11-20 12:27:59.425281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:16.533 [2024-11-20 12:27:59.425292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.533 [2024-11-20 12:27:59.425299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:16.533 [2024-11-20 12:27:59.425522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.533 [2024-11-20 12:27:59.425531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:16.533 [2024-11-20 12:27:59.425543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:18:16.533 [2024-11-20 12:27:59.425551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:16.533 passed 00:18:16.533 Test: blockdev nvme passthru rw ...passed 00:18:16.533 Test: blockdev nvme passthru vendor specific ...[2024-11-20 12:27:59.508392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:16.533 [2024-11-20 12:27:59.508409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:16.533 [2024-11-20 12:27:59.508519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:16.533 [2024-11-20 12:27:59.508528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:16.533 [2024-11-20 12:27:59.508634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:16.533 [2024-11-20 12:27:59.508643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:16.533 [2024-11-20 12:27:59.508746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:16.533 [2024-11-20 12:27:59.508755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:16.533 passed 00:18:16.533 Test: blockdev nvme admin passthru ...passed 00:18:16.533 Test: blockdev copy ...passed 00:18:16.533 00:18:16.533 Run Summary: Type Total Ran Passed Failed Inactive 00:18:16.533 suites 1 1 n/a 0 0 00:18:16.533 tests 23 23 23 0 0 00:18:16.533 asserts 152 152 152 0 n/a 00:18:16.533 00:18:16.534 Elapsed time = 1.084 seconds 
00:18:16.792 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:16.792 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.792 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:16.792 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.792 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:16.792 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:16.792 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:16.792 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:16.792 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:16.792 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:16.792 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:16.792 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:16.792 rmmod nvme_tcp 00:18:16.792 rmmod nvme_fabrics 00:18:16.792 rmmod nvme_keyring 00:18:17.051 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:17.051 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:18:17.051 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:17.051 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 450716 ']' 00:18:17.051 12:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 450716 00:18:17.051 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 450716 ']' 00:18:17.051 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 450716 00:18:17.051 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:17.051 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.051 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 450716 00:18:17.051 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:17.051 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:17.051 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 450716' 00:18:17.051 killing process with pid 450716 00:18:17.051 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 450716 00:18:17.051 12:27:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 450716 00:18:17.311 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:17.311 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:17.311 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:17.311 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:17.311 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:17.311 12:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:17.311 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:17.311 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:17.311 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:17.311 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.311 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:17.311 12:28:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:19.847 00:18:19.847 real 0m10.978s 00:18:19.847 user 0m14.345s 00:18:19.847 sys 0m5.383s 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:19.847 ************************************ 00:18:19.847 END TEST nvmf_bdevio_no_huge 00:18:19.847 ************************************ 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:19.847 
************************************ 00:18:19.847 START TEST nvmf_tls 00:18:19.847 ************************************ 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:19.847 * Looking for test storage... 00:18:19.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:19.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.847 --rc genhtml_branch_coverage=1 00:18:19.847 --rc genhtml_function_coverage=1 00:18:19.847 --rc genhtml_legend=1 00:18:19.847 --rc geninfo_all_blocks=1 00:18:19.847 --rc geninfo_unexecuted_blocks=1 00:18:19.847 00:18:19.847 ' 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:19.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.847 --rc genhtml_branch_coverage=1 00:18:19.847 --rc genhtml_function_coverage=1 00:18:19.847 --rc genhtml_legend=1 00:18:19.847 --rc geninfo_all_blocks=1 00:18:19.847 --rc geninfo_unexecuted_blocks=1 00:18:19.847 00:18:19.847 ' 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:19.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.847 --rc genhtml_branch_coverage=1 00:18:19.847 --rc genhtml_function_coverage=1 00:18:19.847 --rc genhtml_legend=1 00:18:19.847 --rc geninfo_all_blocks=1 00:18:19.847 --rc geninfo_unexecuted_blocks=1 00:18:19.847 00:18:19.847 ' 00:18:19.847 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:19.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.847 --rc genhtml_branch_coverage=1 00:18:19.847 --rc genhtml_function_coverage=1 00:18:19.847 --rc genhtml_legend=1 00:18:19.847 --rc geninfo_all_blocks=1 00:18:19.847 --rc geninfo_unexecuted_blocks=1 00:18:19.847 00:18:19.847 ' 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.848 
12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:19.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:18:19.848 12:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:26.419 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:26.420 12:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:26.420 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:26.420 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:26.420 12:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:26.420 Found net devices under 0000:86:00.0: cvl_0_0 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:26.420 Found net devices under 0000:86:00.1: cvl_0_1 00:18:26.420 12:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:26.420 
12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:26.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:26.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:18:26.420 00:18:26.420 --- 10.0.0.2 ping statistics --- 00:18:26.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.420 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:26.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:26.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:18:26.420 00:18:26.420 --- 10.0.0.1 ping statistics --- 00:18:26.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.420 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=454554 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 454554 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 454554 ']' 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.420 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.420 [2024-11-20 12:28:08.622444] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:18:26.420 [2024-11-20 12:28:08.622488] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.420 [2024-11-20 12:28:08.703431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.420 [2024-11-20 12:28:08.744691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.420 [2024-11-20 12:28:08.744730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:26.420 [2024-11-20 12:28:08.744737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.420 [2024-11-20 12:28:08.744743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.420 [2024-11-20 12:28:08.744748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:26.421 [2024-11-20 12:28:08.745330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.421 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.421 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:26.421 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:26.421 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:26.421 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.421 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.421 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:26.421 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:26.421 true 00:18:26.421 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:26.421 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:26.421 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:26.421 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:26.421 
12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:26.421 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:26.421 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:26.679 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:26.679 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:26.679 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:26.679 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:26.679 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:26.939 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:26.939 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:26.939 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:26.939 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:27.198 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:27.198 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:27.198 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:18:27.457 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:27.457 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:27.457 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:27.457 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:27.457 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:27.715 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:27.715 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:27.974 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:27.974 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:27.974 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:27.974 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:27.974 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:27.974 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:27.974 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:27.974 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:27.974 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:27.974 12:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:27.974 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:27.974 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:27.974 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:27.974 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:27.974 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:27.974 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:27.974 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:27.974 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:27.974 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:27.974 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.p6C9rXHAX9 00:18:27.974 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:27.974 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.KlisWKYhZh 00:18:27.974 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:27.974 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:27.974 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.p6C9rXHAX9 00:18:27.974 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.KlisWKYhZh 00:18:27.974 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:28.232 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:28.491 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.p6C9rXHAX9 00:18:28.491 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.p6C9rXHAX9 00:18:28.491 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:28.750 [2024-11-20 12:28:11.736299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.750 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:29.008 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:29.008 [2024-11-20 12:28:12.101250] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:29.008 [2024-11-20 12:28:12.101451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.267 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:29.267 malloc0 00:18:29.267 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:29.526 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.p6C9rXHAX9 00:18:29.784 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:29.784 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.p6C9rXHAX9 00:18:41.985 Initializing NVMe Controllers 00:18:41.985 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:41.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:41.985 Initialization complete. Launching workers. 
00:18:41.985 ======================================================== 00:18:41.985 Latency(us) 00:18:41.985 Device Information : IOPS MiB/s Average min max 00:18:41.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16393.24 64.04 3904.13 853.82 5874.42 00:18:41.985 ======================================================== 00:18:41.985 Total : 16393.24 64.04 3904.13 853.82 5874.42 00:18:41.985 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p6C9rXHAX9 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.p6C9rXHAX9 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=457086 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 457086 /var/tmp/bdevperf.sock 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 457086 ']' 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.985 [2024-11-20 12:28:23.072618] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:18:41.985 [2024-11-20 12:28:23.072670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457086 ] 00:18:41.985 [2024-11-20 12:28:23.149228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.985 [2024-11-20 12:28:23.188981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p6C9rXHAX9 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:41.985 [2024-11-20 12:28:23.651668] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.985 TLSTESTn1 00:18:41.985 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:41.985 Running I/O for 10 seconds... 00:18:42.921 5168.00 IOPS, 20.19 MiB/s [2024-11-20T11:28:27.046Z] 5318.00 IOPS, 20.77 MiB/s [2024-11-20T11:28:28.077Z] 5369.67 IOPS, 20.98 MiB/s [2024-11-20T11:28:29.014Z] 5403.25 IOPS, 21.11 MiB/s [2024-11-20T11:28:29.952Z] 5417.80 IOPS, 21.16 MiB/s [2024-11-20T11:28:30.889Z] 5412.00 IOPS, 21.14 MiB/s [2024-11-20T11:28:32.282Z] 5386.57 IOPS, 21.04 MiB/s [2024-11-20T11:28:33.219Z] 5393.00 IOPS, 21.07 MiB/s [2024-11-20T11:28:34.158Z] 5411.33 IOPS, 21.14 MiB/s [2024-11-20T11:28:34.158Z] 5426.70 IOPS, 21.20 MiB/s 00:18:51.042 Latency(us) 00:18:51.042 [2024-11-20T11:28:34.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.042 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:51.042 Verification LBA range: start 0x0 length 0x2000 00:18:51.042 TLSTESTn1 : 10.01 5431.70 21.22 0.00 0.00 23531.00 5299.87 44906.41 00:18:51.042 [2024-11-20T11:28:34.158Z] =================================================================================================================== 00:18:51.042 [2024-11-20T11:28:34.158Z] Total : 5431.70 21.22 0.00 0.00 23531.00 5299.87 44906.41 00:18:51.042 { 00:18:51.042 "results": [ 00:18:51.042 { 00:18:51.042 "job": "TLSTESTn1", 00:18:51.042 "core_mask": "0x4", 00:18:51.042 "workload": "verify", 00:18:51.042 "status": "finished", 00:18:51.042 "verify_range": { 00:18:51.042 "start": 0, 00:18:51.042 "length": 8192 00:18:51.042 }, 00:18:51.042 "queue_depth": 128, 00:18:51.042 "io_size": 4096, 00:18:51.042 "runtime": 10.014167, 00:18:51.042 "iops": 
5431.7049036629805, 00:18:51.042 "mibps": 21.217597279933518, 00:18:51.042 "io_failed": 0, 00:18:51.042 "io_timeout": 0, 00:18:51.042 "avg_latency_us": 23531.00169014805, 00:18:51.042 "min_latency_us": 5299.8678260869565, 00:18:51.042 "max_latency_us": 44906.406956521736 00:18:51.042 } 00:18:51.042 ], 00:18:51.042 "core_count": 1 00:18:51.042 } 00:18:51.042 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:51.042 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 457086 00:18:51.042 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 457086 ']' 00:18:51.042 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 457086 00:18:51.042 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:51.042 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.042 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 457086 00:18:51.042 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:51.042 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:51.042 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 457086' 00:18:51.042 killing process with pid 457086 00:18:51.042 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 457086 00:18:51.042 Received shutdown signal, test time was about 10.000000 seconds 00:18:51.042 00:18:51.042 Latency(us) 00:18:51.042 [2024-11-20T11:28:34.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.042 [2024-11-20T11:28:34.158Z] 
=================================================================================================================== 00:18:51.042 [2024-11-20T11:28:34.158Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:51.043 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 457086 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KlisWKYhZh 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KlisWKYhZh 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KlisWKYhZh 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KlisWKYhZh 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=458920 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 458920 /var/tmp/bdevperf.sock 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 458920 ']' 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.043 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.302 [2024-11-20 12:28:34.164384] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:18:51.302 [2024-11-20 12:28:34.164437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid458920 ] 00:18:51.302 [2024-11-20 12:28:34.234294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.302 [2024-11-20 12:28:34.272726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.302 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.302 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:51.302 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KlisWKYhZh 00:18:51.561 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:51.821 [2024-11-20 12:28:34.731450] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:51.821 [2024-11-20 12:28:34.736065] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:51.821 [2024-11-20 12:28:34.736752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bb170 (107): Transport endpoint is not connected 00:18:51.821 [2024-11-20 12:28:34.737744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bb170 (9): Bad file descriptor 00:18:51.821 
[2024-11-20 12:28:34.738746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:51.821 [2024-11-20 12:28:34.738754] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:51.821 [2024-11-20 12:28:34.738761] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:51.821 [2024-11-20 12:28:34.738771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:51.821 request: 00:18:51.821 { 00:18:51.821 "name": "TLSTEST", 00:18:51.821 "trtype": "tcp", 00:18:51.821 "traddr": "10.0.0.2", 00:18:51.821 "adrfam": "ipv4", 00:18:51.821 "trsvcid": "4420", 00:18:51.821 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.821 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:51.821 "prchk_reftag": false, 00:18:51.821 "prchk_guard": false, 00:18:51.821 "hdgst": false, 00:18:51.821 "ddgst": false, 00:18:51.821 "psk": "key0", 00:18:51.821 "allow_unrecognized_csi": false, 00:18:51.821 "method": "bdev_nvme_attach_controller", 00:18:51.821 "req_id": 1 00:18:51.821 } 00:18:51.821 Got JSON-RPC error response 00:18:51.821 response: 00:18:51.821 { 00:18:51.821 "code": -5, 00:18:51.821 "message": "Input/output error" 00:18:51.821 } 00:18:51.821 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 458920 00:18:51.821 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 458920 ']' 00:18:51.821 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 458920 00:18:51.821 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:51.821 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.821 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 458920 00:18:51.821 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:51.821 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:51.821 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 458920' 00:18:51.821 killing process with pid 458920 00:18:51.821 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 458920 00:18:51.821 Received shutdown signal, test time was about 10.000000 seconds 00:18:51.821 00:18:51.821 Latency(us) 00:18:51.821 [2024-11-20T11:28:34.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.821 [2024-11-20T11:28:34.937Z] =================================================================================================================== 00:18:51.821 [2024-11-20T11:28:34.937Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:51.821 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 458920 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.p6C9rXHAX9 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.p6C9rXHAX9 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.p6C9rXHAX9 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.p6C9rXHAX9 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=458947 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 458947 
/var/tmp/bdevperf.sock 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 458947 ']' 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:52.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.081 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.081 [2024-11-20 12:28:35.017723] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:18:52.081 [2024-11-20 12:28:35.017771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid458947 ] 00:18:52.081 [2024-11-20 12:28:35.082456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.081 [2024-11-20 12:28:35.119530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.340 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.340 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:52.340 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p6C9rXHAX9 00:18:52.340 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:52.600 [2024-11-20 12:28:35.602521] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:52.600 [2024-11-20 12:28:35.607228] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:52.600 [2024-11-20 12:28:35.607249] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:52.600 [2024-11-20 12:28:35.607276] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:52.600 [2024-11-20 12:28:35.607925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x969170 (107): Transport endpoint is not connected 00:18:52.600 [2024-11-20 12:28:35.608918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x969170 (9): Bad file descriptor 00:18:52.600 [2024-11-20 12:28:35.609919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:52.600 [2024-11-20 12:28:35.609928] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:52.600 [2024-11-20 12:28:35.609935] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:52.600 [2024-11-20 12:28:35.609945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:52.600 request: 00:18:52.600 { 00:18:52.600 "name": "TLSTEST", 00:18:52.600 "trtype": "tcp", 00:18:52.600 "traddr": "10.0.0.2", 00:18:52.600 "adrfam": "ipv4", 00:18:52.600 "trsvcid": "4420", 00:18:52.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.600 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:52.600 "prchk_reftag": false, 00:18:52.600 "prchk_guard": false, 00:18:52.600 "hdgst": false, 00:18:52.600 "ddgst": false, 00:18:52.600 "psk": "key0", 00:18:52.600 "allow_unrecognized_csi": false, 00:18:52.600 "method": "bdev_nvme_attach_controller", 00:18:52.600 "req_id": 1 00:18:52.600 } 00:18:52.600 Got JSON-RPC error response 00:18:52.600 response: 00:18:52.600 { 00:18:52.600 "code": -5, 00:18:52.600 "message": "Input/output error" 00:18:52.600 } 00:18:52.600 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 458947 00:18:52.600 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 458947 ']' 00:18:52.600 12:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 458947 00:18:52.600 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.600 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.600 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 458947 00:18:52.600 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:52.600 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:52.600 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 458947' 00:18:52.600 killing process with pid 458947 00:18:52.600 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 458947 00:18:52.600 Received shutdown signal, test time was about 10.000000 seconds 00:18:52.600 00:18:52.600 Latency(us) 00:18:52.600 [2024-11-20T11:28:35.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.600 [2024-11-20T11:28:35.716Z] =================================================================================================================== 00:18:52.600 [2024-11-20T11:28:35.716Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:52.600 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 458947 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:52.860 12:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.p6C9rXHAX9 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.p6C9rXHAX9 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.p6C9rXHAX9 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.p6C9rXHAX9 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=459181 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 459181 /var/tmp/bdevperf.sock 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 459181 ']' 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:52.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.860 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.860 [2024-11-20 12:28:35.892252] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:18:52.860 [2024-11-20 12:28:35.892301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459181 ] 00:18:52.860 [2024-11-20 12:28:35.957051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.119 [2024-11-20 12:28:35.995366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.119 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.119 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:53.119 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p6C9rXHAX9 00:18:53.379 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:53.379 [2024-11-20 12:28:36.457959] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:53.379 [2024-11-20 12:28:36.469449] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:53.379 [2024-11-20 12:28:36.469475] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:53.379 [2024-11-20 12:28:36.469497] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:53.379 [2024-11-20 12:28:36.470396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe4170 (107): Transport endpoint is not connected 00:18:53.379 [2024-11-20 12:28:36.471390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe4170 (9): Bad file descriptor 00:18:53.379 [2024-11-20 12:28:36.472391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:53.379 [2024-11-20 12:28:36.472400] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:53.379 [2024-11-20 12:28:36.472407] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:53.379 [2024-11-20 12:28:36.472417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:53.379 request: 00:18:53.379 { 00:18:53.379 "name": "TLSTEST", 00:18:53.379 "trtype": "tcp", 00:18:53.379 "traddr": "10.0.0.2", 00:18:53.379 "adrfam": "ipv4", 00:18:53.379 "trsvcid": "4420", 00:18:53.379 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:53.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:53.379 "prchk_reftag": false, 00:18:53.379 "prchk_guard": false, 00:18:53.379 "hdgst": false, 00:18:53.379 "ddgst": false, 00:18:53.379 "psk": "key0", 00:18:53.379 "allow_unrecognized_csi": false, 00:18:53.379 "method": "bdev_nvme_attach_controller", 00:18:53.379 "req_id": 1 00:18:53.379 } 00:18:53.379 Got JSON-RPC error response 00:18:53.379 response: 00:18:53.379 { 00:18:53.379 "code": -5, 00:18:53.379 "message": "Input/output error" 00:18:53.379 } 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 459181 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 459181 ']' 00:18:53.639 12:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 459181 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 459181 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 459181' 00:18:53.639 killing process with pid 459181 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 459181 00:18:53.639 Received shutdown signal, test time was about 10.000000 seconds 00:18:53.639 00:18:53.639 Latency(us) 00:18:53.639 [2024-11-20T11:28:36.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.639 [2024-11-20T11:28:36.755Z] =================================================================================================================== 00:18:53.639 [2024-11-20T11:28:36.755Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 459181 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:53.639 12:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=459271 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:53.639 12:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 459271 /var/tmp/bdevperf.sock 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 459271 ']' 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.639 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.639 [2024-11-20 12:28:36.736832] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:18:53.639 [2024-11-20 12:28:36.736883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459271 ] 00:18:53.898 [2024-11-20 12:28:36.814194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.898 [2024-11-20 12:28:36.854488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.898 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.898 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:53.898 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:54.157 [2024-11-20 12:28:37.117533] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:54.157 [2024-11-20 12:28:37.117566] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:54.157 request: 00:18:54.157 { 00:18:54.157 "name": "key0", 00:18:54.157 "path": "", 00:18:54.157 "method": "keyring_file_add_key", 00:18:54.157 "req_id": 1 00:18:54.157 } 00:18:54.157 Got JSON-RPC error response 00:18:54.157 response: 00:18:54.157 { 00:18:54.157 "code": -1, 00:18:54.157 "message": "Operation not permitted" 00:18:54.157 } 00:18:54.157 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:54.417 [2024-11-20 12:28:37.314134] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:54.417 [2024-11-20 12:28:37.314158] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:54.417 request: 00:18:54.417 { 00:18:54.417 "name": "TLSTEST", 00:18:54.417 "trtype": "tcp", 00:18:54.417 "traddr": "10.0.0.2", 00:18:54.417 "adrfam": "ipv4", 00:18:54.417 "trsvcid": "4420", 00:18:54.417 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.417 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:54.417 "prchk_reftag": false, 00:18:54.417 "prchk_guard": false, 00:18:54.417 "hdgst": false, 00:18:54.417 "ddgst": false, 00:18:54.417 "psk": "key0", 00:18:54.417 "allow_unrecognized_csi": false, 00:18:54.417 "method": "bdev_nvme_attach_controller", 00:18:54.417 "req_id": 1 00:18:54.417 } 00:18:54.417 Got JSON-RPC error response 00:18:54.417 response: 00:18:54.417 { 00:18:54.417 "code": -126, 00:18:54.417 "message": "Required key not available" 00:18:54.417 } 00:18:54.417 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 459271 00:18:54.417 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 459271 ']' 00:18:54.417 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 459271 00:18:54.417 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:54.417 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.417 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 459271 00:18:54.417 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:54.417 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:54.417 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 459271' 00:18:54.417 killing process with pid 459271 00:18:54.417 
12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 459271 00:18:54.417 Received shutdown signal, test time was about 10.000000 seconds 00:18:54.417 00:18:54.417 Latency(us) 00:18:54.417 [2024-11-20T11:28:37.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.417 [2024-11-20T11:28:37.533Z] =================================================================================================================== 00:18:54.417 [2024-11-20T11:28:37.533Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:54.417 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 459271 00:18:54.677 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:54.677 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:54.677 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:54.677 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:54.677 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:54.678 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 454554 00:18:54.678 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 454554 ']' 00:18:54.678 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 454554 00:18:54.678 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:54.678 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.678 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 454554 00:18:54.678 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:18:54.678 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:54.678 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 454554' 00:18:54.678 killing process with pid 454554 00:18:54.678 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 454554 00:18:54.678 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 454554 00:18:54.678 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:54.678 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:54.678 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:54.678 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:54.678 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:54.678 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:54.678 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:54.938 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:54.938 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:54.938 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.6x3LmvrJxv 00:18:54.938 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:54.938 12:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.6x3LmvrJxv 00:18:54.938 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:54.938 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:54.938 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:54.938 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.938 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=459446 00:18:54.938 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:54.938 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 459446 00:18:54.938 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 459446 ']' 00:18:54.938 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.938 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.938 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.938 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.938 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.938 [2024-11-20 12:28:37.867786] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:18:54.938 [2024-11-20 12:28:37.867838] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.938 [2024-11-20 12:28:37.949871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.938 [2024-11-20 12:28:37.988075] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.938 [2024-11-20 12:28:37.988110] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.938 [2024-11-20 12:28:37.988118] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.938 [2024-11-20 12:28:37.988127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.938 [2024-11-20 12:28:37.988133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:54.938 [2024-11-20 12:28:37.988684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.197 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.197 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:55.197 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:55.197 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:55.197 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.197 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.197 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.6x3LmvrJxv 00:18:55.197 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6x3LmvrJxv 00:18:55.197 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:55.197 [2024-11-20 12:28:38.305322] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.456 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:55.456 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:55.715 [2024-11-20 12:28:38.674264] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:55.715 [2024-11-20 12:28:38.674470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:55.715 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:55.974 malloc0 00:18:55.974 12:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:55.974 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6x3LmvrJxv 00:18:56.234 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:56.493 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6x3LmvrJxv 00:18:56.493 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:56.493 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:56.493 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:56.493 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6x3LmvrJxv 00:18:56.493 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:56.493 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=459746 00:18:56.493 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:56.493 12:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:56.493 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 459746 /var/tmp/bdevperf.sock 00:18:56.493 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 459746 ']' 00:18:56.493 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.493 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.493 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:56.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:56.493 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.493 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.493 [2024-11-20 12:28:39.495125] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:18:56.493 [2024-11-20 12:28:39.495173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459746 ] 00:18:56.493 [2024-11-20 12:28:39.570775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.752 [2024-11-20 12:28:39.613032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.752 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.752 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:56.752 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6x3LmvrJxv 00:18:57.012 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:57.012 [2024-11-20 12:28:40.064422] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:57.271 TLSTESTn1 00:18:57.271 12:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:57.271 Running I/O for 10 seconds... 
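The `waitforlisten` step above blocks until bdevperf has created its RPC socket at /var/tmp/bdevperf.sock. A minimal sketch of that polling pattern (the real helper lives in autotest_common.sh and has more retry logic; the function name and budget here are illustrative):

```shell
# Poll until a UNIX-domain socket exists, giving up after max_retries probes.
# This mirrors the waitforlisten idiom used throughout these test scripts.
wait_for_sock() {
  local sock=$1 max_retries=${2:-100} i=0
  while [ ! -S "$sock" ]; do
    i=$((i + 1))
    # Exhausted the retry budget: report failure to the caller.
    [ "$i" -ge "$max_retries" ] && return 1
    sleep 0.1
  done
  return 0
}
```

Once the socket appears, the script can safely issue `rpc.py -s /var/tmp/bdevperf.sock` calls, as the log does next.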
00:18:59.144 5026.00 IOPS, 19.63 MiB/s [2024-11-20T11:28:43.638Z] 5251.50 IOPS, 20.51 MiB/s [2024-11-20T11:28:44.574Z] 5328.00 IOPS, 20.81 MiB/s [2024-11-20T11:28:45.511Z] 5336.50 IOPS, 20.85 MiB/s [2024-11-20T11:28:46.448Z] 5371.00 IOPS, 20.98 MiB/s [2024-11-20T11:28:47.386Z] 5401.33 IOPS, 21.10 MiB/s [2024-11-20T11:28:48.323Z] 5419.86 IOPS, 21.17 MiB/s [2024-11-20T11:28:49.701Z] 5409.88 IOPS, 21.13 MiB/s [2024-11-20T11:28:50.638Z] 5423.22 IOPS, 21.18 MiB/s [2024-11-20T11:28:50.638Z] 5426.90 IOPS, 21.20 MiB/s 00:19:07.522 Latency(us) 00:19:07.522 [2024-11-20T11:28:50.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.522 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:07.522 Verification LBA range: start 0x0 length 0x2000 00:19:07.522 TLSTESTn1 : 10.01 5432.70 21.22 0.00 0.00 23527.20 5185.89 53340.61 00:19:07.522 [2024-11-20T11:28:50.638Z] =================================================================================================================== 00:19:07.522 [2024-11-20T11:28:50.638Z] Total : 5432.70 21.22 0.00 0.00 23527.20 5185.89 53340.61 00:19:07.522 { 00:19:07.522 "results": [ 00:19:07.522 { 00:19:07.522 "job": "TLSTESTn1", 00:19:07.522 "core_mask": "0x4", 00:19:07.522 "workload": "verify", 00:19:07.522 "status": "finished", 00:19:07.522 "verify_range": { 00:19:07.522 "start": 0, 00:19:07.522 "length": 8192 00:19:07.522 }, 00:19:07.522 "queue_depth": 128, 00:19:07.522 "io_size": 4096, 00:19:07.522 "runtime": 10.012515, 00:19:07.522 "iops": 5432.700974730125, 00:19:07.522 "mibps": 21.221488182539552, 00:19:07.522 "io_failed": 0, 00:19:07.522 "io_timeout": 0, 00:19:07.522 "avg_latency_us": 23527.20058509214, 00:19:07.522 "min_latency_us": 5185.892173913044, 00:19:07.522 "max_latency_us": 53340.605217391305 00:19:07.522 } 00:19:07.522 ], 00:19:07.522 "core_count": 1 00:19:07.522 } 00:19:07.522 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:07.522 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 459746 00:19:07.522 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 459746 ']' 00:19:07.522 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 459746 00:19:07.522 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:07.522 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.522 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 459746 00:19:07.522 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:07.522 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:07.522 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 459746' 00:19:07.522 killing process with pid 459746 00:19:07.522 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 459746 00:19:07.522 Received shutdown signal, test time was about 10.000000 seconds 00:19:07.522 00:19:07.522 Latency(us) 00:19:07.522 [2024-11-20T11:28:50.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.522 [2024-11-20T11:28:50.638Z] =================================================================================================================== 00:19:07.522 [2024-11-20T11:28:50.638Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:07.522 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 459746 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.6x3LmvrJxv 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6x3LmvrJxv 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6x3LmvrJxv 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6x3LmvrJxv 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6x3LmvrJxv 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=461537 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 461537 /var/tmp/bdevperf.sock 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 461537 ']' 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:07.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.523 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.523 [2024-11-20 12:28:50.572318] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:19:07.523 [2024-11-20 12:28:50.572366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid461537 ] 00:19:07.782 [2024-11-20 12:28:50.647907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.782 [2024-11-20 12:28:50.690265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.782 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.782 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:07.782 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6x3LmvrJxv 00:19:08.042 [2024-11-20 12:28:50.962415] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6x3LmvrJxv': 0100666 00:19:08.042 [2024-11-20 12:28:50.962443] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:08.042 request: 00:19:08.042 { 00:19:08.042 "name": "key0", 00:19:08.042 "path": "/tmp/tmp.6x3LmvrJxv", 00:19:08.042 "method": "keyring_file_add_key", 00:19:08.042 "req_id": 1 00:19:08.042 } 00:19:08.042 Got JSON-RPC error response 00:19:08.042 response: 00:19:08.042 { 00:19:08.042 "code": -1, 00:19:08.042 "message": "Operation not permitted" 00:19:08.042 } 00:19:08.042 12:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:08.042 [2024-11-20 12:28:51.142964] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:08.042 [2024-11-20 12:28:51.142990] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:08.042 request: 00:19:08.042 { 00:19:08.042 "name": "TLSTEST", 00:19:08.042 "trtype": "tcp", 00:19:08.042 "traddr": "10.0.0.2", 00:19:08.042 "adrfam": "ipv4", 00:19:08.042 "trsvcid": "4420", 00:19:08.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:08.042 "prchk_reftag": false, 00:19:08.042 "prchk_guard": false, 00:19:08.042 "hdgst": false, 00:19:08.042 "ddgst": false, 00:19:08.042 "psk": "key0", 00:19:08.042 "allow_unrecognized_csi": false, 00:19:08.042 "method": "bdev_nvme_attach_controller", 00:19:08.042 "req_id": 1 00:19:08.042 } 00:19:08.042 Got JSON-RPC error response 00:19:08.042 response: 00:19:08.042 { 00:19:08.042 "code": -126, 00:19:08.042 "message": "Required key not available" 00:19:08.042 } 00:19:08.301 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 461537 00:19:08.301 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 461537 ']' 00:19:08.301 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 461537 00:19:08.301 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:08.301 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:08.301 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 461537 00:19:08.301 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:08.301 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:08.301 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 461537' 00:19:08.302 killing process with pid 461537 00:19:08.302 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 461537 00:19:08.302 Received shutdown signal, test time was about 10.000000 seconds 00:19:08.302 00:19:08.302 Latency(us) 00:19:08.302 [2024-11-20T11:28:51.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.302 [2024-11-20T11:28:51.418Z] =================================================================================================================== 00:19:08.302 [2024-11-20T11:28:51.418Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:08.302 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 461537 00:19:08.302 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:08.302 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:08.302 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:08.302 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:08.302 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:08.302 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 459446 00:19:08.302 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 459446 ']' 00:19:08.302 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 459446 00:19:08.302 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:08.302 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:08.302 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 459446 00:19:08.302 12:28:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:08.302 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:08.302 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 459446' 00:19:08.302 killing process with pid 459446 00:19:08.302 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 459446 00:19:08.302 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 459446 00:19:08.562 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:08.562 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:08.562 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.562 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.562 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=461775 00:19:08.562 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:08.562 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 461775 00:19:08.562 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 461775 ']' 00:19:08.562 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.562 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.562 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:08.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.562 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.562 12:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.562 [2024-11-20 12:28:51.648143] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:19:08.562 [2024-11-20 12:28:51.648193] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.821 [2024-11-20 12:28:51.727413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.821 [2024-11-20 12:28:51.768538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.821 [2024-11-20 12:28:51.768573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.821 [2024-11-20 12:28:51.768580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.821 [2024-11-20 12:28:51.768586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.821 [2024-11-20 12:28:51.768591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
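The repeated `killprocess` sequences in this log all follow the same shape: confirm the PID is alive with `kill -0`, read the process name via `ps -o comm=` (so a `sudo` wrapper is never signalled directly), then kill it. A self-contained sketch of that pattern, run against our own short-lived `sleep` rather than a real reactor process:

```shell
# Sketch of the killprocess helper pattern: liveness check, name check, SIGKILL.
sleep 30 &
pid=$!
if kill -0 "$pid" 2>/dev/null; then
  # comm= prints just the executable name with no header (here: "sleep").
  name=$(ps --no-headers -o comm= "$pid")
  echo "killing process with pid $pid ($name)"
  kill -9 "$pid"
fi
# Reap the child so the PID cannot linger as a zombie.
wait "$pid" 2>/dev/null || true
```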
00:19:08.821 [2024-11-20 12:28:51.769194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.389 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.389 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:09.389 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:09.389 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:09.389 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.648 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.648 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.6x3LmvrJxv 00:19:09.648 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:09.648 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.6x3LmvrJxv 00:19:09.648 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:09.648 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.648 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:09.648 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.648 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.6x3LmvrJxv 00:19:09.648 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6x3LmvrJxv 00:19:09.648 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:09.648 [2024-11-20 12:28:52.698995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.648 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:09.907 12:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:10.166 [2024-11-20 12:28:53.087997] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:10.166 [2024-11-20 12:28:53.088185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.166 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:10.425 malloc0 00:19:10.425 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:10.425 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6x3LmvrJxv 00:19:10.683 [2024-11-20 12:28:53.689665] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6x3LmvrJxv': 0100666 00:19:10.683 [2024-11-20 12:28:53.689697] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:10.683 request: 00:19:10.683 { 00:19:10.683 "name": "key0", 00:19:10.683 "path": "/tmp/tmp.6x3LmvrJxv", 00:19:10.683 "method": "keyring_file_add_key", 00:19:10.683 "req_id": 1 
00:19:10.683 } 00:19:10.683 Got JSON-RPC error response 00:19:10.683 response: 00:19:10.683 { 00:19:10.683 "code": -1, 00:19:10.683 "message": "Operation not permitted" 00:19:10.683 } 00:19:10.683 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:10.985 [2024-11-20 12:28:53.902242] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:10.985 [2024-11-20 12:28:53.902279] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:10.985 request: 00:19:10.985 { 00:19:10.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.985 "host": "nqn.2016-06.io.spdk:host1", 00:19:10.985 "psk": "key0", 00:19:10.985 "method": "nvmf_subsystem_add_host", 00:19:10.985 "req_id": 1 00:19:10.985 } 00:19:10.985 Got JSON-RPC error response 00:19:10.985 response: 00:19:10.985 { 00:19:10.985 "code": -32603, 00:19:10.985 "message": "Internal error" 00:19:10.985 } 00:19:10.985 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:10.985 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:10.985 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:10.985 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:10.985 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 461775 00:19:10.985 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 461775 ']' 00:19:10.985 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 461775 00:19:10.985 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:10.985 12:28:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.985 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 461775 00:19:10.985 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:10.985 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:10.985 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 461775' 00:19:10.985 killing process with pid 461775 00:19:10.985 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 461775 00:19:10.985 12:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 461775 00:19:11.284 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.6x3LmvrJxv 00:19:11.284 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:11.284 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:11.284 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:11.284 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.284 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=462264 00:19:11.284 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:11.284 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 462264 00:19:11.284 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 462264 ']' 00:19:11.284 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.284 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.284 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.284 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.284 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.284 [2024-11-20 12:28:54.211502] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:19:11.284 [2024-11-20 12:28:54.211547] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.284 [2024-11-20 12:28:54.291209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.284 [2024-11-20 12:28:54.332119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.284 [2024-11-20 12:28:54.332152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.285 [2024-11-20 12:28:54.332160] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.285 [2024-11-20 12:28:54.332165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.285 [2024-11-20 12:28:54.332170] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
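The earlier `chmod 0666` run failed in `keyring_file_check_path` with "Invalid permissions for key file ... 0100666", and the `chmod 0600` above is what lets the retry succeed: SPDK refuses PSK files readable by group or others. A rough reproduction of that permission gate using `stat` (the check shown is an approximation of what the keyring module enforces, not its actual code):

```shell
# Demonstrate why 0666 is rejected and 0600 accepted for a PSK file.
key=$(mktemp)
chmod 0666 "$key"
mode=$(stat -c '%a' "$key")
if [ "$mode" != "600" ]; then
  echo "rejected: mode 0$mode (expected 0600)"
fi
chmod 0600 "$key"
mode=$(stat -c '%a' "$key")
[ "$mode" = "600" ] && echo "accepted"
rm -f "$key"
```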
00:19:11.285 [2024-11-20 12:28:54.332729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.543 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.543 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:11.543 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:11.543 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:11.543 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.543 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.543 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.6x3LmvrJxv 00:19:11.543 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6x3LmvrJxv 00:19:11.543 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:11.543 [2024-11-20 12:28:54.632612] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.802 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:11.802 12:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:12.061 [2024-11-20 12:28:55.041661] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:12.061 [2024-11-20 12:28:55.041847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:12.061 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:12.320 malloc0 00:19:12.320 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:12.579 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6x3LmvrJxv 00:19:12.579 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:12.839 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:12.839 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=462526 00:19:12.839 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:12.839 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 462526 /var/tmp/bdevperf.sock 00:19:12.839 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 462526 ']' 00:19:12.839 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:12.839 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.839 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:19:12.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:12.839 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.839 12:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.839 [2024-11-20 12:28:55.924985] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:19:12.839 [2024-11-20 12:28:55.925036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid462526 ] 00:19:13.101 [2024-11-20 12:28:56.001717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.101 [2024-11-20 12:28:56.044703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.101 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.101 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:13.101 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6x3LmvrJxv 00:19:13.360 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:13.619 [2024-11-20 12:28:56.516594] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:13.619 TLSTESTn1 00:19:13.619 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:13.878 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:13.878 "subsystems": [ 00:19:13.878 { 00:19:13.878 "subsystem": "keyring", 00:19:13.878 "config": [ 00:19:13.878 { 00:19:13.878 "method": "keyring_file_add_key", 00:19:13.878 "params": { 00:19:13.878 "name": "key0", 00:19:13.878 "path": "/tmp/tmp.6x3LmvrJxv" 00:19:13.878 } 00:19:13.878 } 00:19:13.878 ] 00:19:13.878 }, 00:19:13.878 { 00:19:13.878 "subsystem": "iobuf", 00:19:13.878 "config": [ 00:19:13.878 { 00:19:13.878 "method": "iobuf_set_options", 00:19:13.878 "params": { 00:19:13.878 "small_pool_count": 8192, 00:19:13.878 "large_pool_count": 1024, 00:19:13.878 "small_bufsize": 8192, 00:19:13.878 "large_bufsize": 135168, 00:19:13.878 "enable_numa": false 00:19:13.878 } 00:19:13.878 } 00:19:13.878 ] 00:19:13.878 }, 00:19:13.878 { 00:19:13.878 "subsystem": "sock", 00:19:13.878 "config": [ 00:19:13.878 { 00:19:13.878 "method": "sock_set_default_impl", 00:19:13.878 "params": { 00:19:13.878 "impl_name": "posix" 00:19:13.878 } 00:19:13.878 }, 00:19:13.878 { 00:19:13.878 "method": "sock_impl_set_options", 00:19:13.878 "params": { 00:19:13.878 "impl_name": "ssl", 00:19:13.878 "recv_buf_size": 4096, 00:19:13.878 "send_buf_size": 4096, 00:19:13.878 "enable_recv_pipe": true, 00:19:13.878 "enable_quickack": false, 00:19:13.878 "enable_placement_id": 0, 00:19:13.878 "enable_zerocopy_send_server": true, 00:19:13.878 "enable_zerocopy_send_client": false, 00:19:13.878 "zerocopy_threshold": 0, 00:19:13.878 "tls_version": 0, 00:19:13.878 "enable_ktls": false 00:19:13.878 } 00:19:13.878 }, 00:19:13.878 { 00:19:13.878 "method": "sock_impl_set_options", 00:19:13.878 "params": { 00:19:13.878 "impl_name": "posix", 00:19:13.878 "recv_buf_size": 2097152, 00:19:13.878 "send_buf_size": 2097152, 00:19:13.878 "enable_recv_pipe": true, 00:19:13.878 "enable_quickack": false, 00:19:13.878 "enable_placement_id": 0, 
00:19:13.878 "enable_zerocopy_send_server": true, 00:19:13.878 "enable_zerocopy_send_client": false, 00:19:13.878 "zerocopy_threshold": 0, 00:19:13.878 "tls_version": 0, 00:19:13.878 "enable_ktls": false 00:19:13.878 } 00:19:13.878 } 00:19:13.878 ] 00:19:13.878 }, 00:19:13.878 { 00:19:13.878 "subsystem": "vmd", 00:19:13.878 "config": [] 00:19:13.878 }, 00:19:13.878 { 00:19:13.878 "subsystem": "accel", 00:19:13.878 "config": [ 00:19:13.878 { 00:19:13.878 "method": "accel_set_options", 00:19:13.878 "params": { 00:19:13.878 "small_cache_size": 128, 00:19:13.878 "large_cache_size": 16, 00:19:13.878 "task_count": 2048, 00:19:13.878 "sequence_count": 2048, 00:19:13.878 "buf_count": 2048 00:19:13.878 } 00:19:13.878 } 00:19:13.878 ] 00:19:13.878 }, 00:19:13.878 { 00:19:13.878 "subsystem": "bdev", 00:19:13.878 "config": [ 00:19:13.878 { 00:19:13.878 "method": "bdev_set_options", 00:19:13.878 "params": { 00:19:13.878 "bdev_io_pool_size": 65535, 00:19:13.878 "bdev_io_cache_size": 256, 00:19:13.878 "bdev_auto_examine": true, 00:19:13.878 "iobuf_small_cache_size": 128, 00:19:13.878 "iobuf_large_cache_size": 16 00:19:13.878 } 00:19:13.878 }, 00:19:13.878 { 00:19:13.878 "method": "bdev_raid_set_options", 00:19:13.878 "params": { 00:19:13.878 "process_window_size_kb": 1024, 00:19:13.878 "process_max_bandwidth_mb_sec": 0 00:19:13.878 } 00:19:13.878 }, 00:19:13.878 { 00:19:13.878 "method": "bdev_iscsi_set_options", 00:19:13.878 "params": { 00:19:13.878 "timeout_sec": 30 00:19:13.878 } 00:19:13.878 }, 00:19:13.878 { 00:19:13.878 "method": "bdev_nvme_set_options", 00:19:13.878 "params": { 00:19:13.879 "action_on_timeout": "none", 00:19:13.879 "timeout_us": 0, 00:19:13.879 "timeout_admin_us": 0, 00:19:13.879 "keep_alive_timeout_ms": 10000, 00:19:13.879 "arbitration_burst": 0, 00:19:13.879 "low_priority_weight": 0, 00:19:13.879 "medium_priority_weight": 0, 00:19:13.879 "high_priority_weight": 0, 00:19:13.879 "nvme_adminq_poll_period_us": 10000, 00:19:13.879 "nvme_ioq_poll_period_us": 0, 
00:19:13.879 "io_queue_requests": 0, 00:19:13.879 "delay_cmd_submit": true, 00:19:13.879 "transport_retry_count": 4, 00:19:13.879 "bdev_retry_count": 3, 00:19:13.879 "transport_ack_timeout": 0, 00:19:13.879 "ctrlr_loss_timeout_sec": 0, 00:19:13.879 "reconnect_delay_sec": 0, 00:19:13.879 "fast_io_fail_timeout_sec": 0, 00:19:13.879 "disable_auto_failback": false, 00:19:13.879 "generate_uuids": false, 00:19:13.879 "transport_tos": 0, 00:19:13.879 "nvme_error_stat": false, 00:19:13.879 "rdma_srq_size": 0, 00:19:13.879 "io_path_stat": false, 00:19:13.879 "allow_accel_sequence": false, 00:19:13.879 "rdma_max_cq_size": 0, 00:19:13.879 "rdma_cm_event_timeout_ms": 0, 00:19:13.879 "dhchap_digests": [ 00:19:13.879 "sha256", 00:19:13.879 "sha384", 00:19:13.879 "sha512" 00:19:13.879 ], 00:19:13.879 "dhchap_dhgroups": [ 00:19:13.879 "null", 00:19:13.879 "ffdhe2048", 00:19:13.879 "ffdhe3072", 00:19:13.879 "ffdhe4096", 00:19:13.879 "ffdhe6144", 00:19:13.879 "ffdhe8192" 00:19:13.879 ] 00:19:13.879 } 00:19:13.879 }, 00:19:13.879 { 00:19:13.879 "method": "bdev_nvme_set_hotplug", 00:19:13.879 "params": { 00:19:13.879 "period_us": 100000, 00:19:13.879 "enable": false 00:19:13.879 } 00:19:13.879 }, 00:19:13.879 { 00:19:13.879 "method": "bdev_malloc_create", 00:19:13.879 "params": { 00:19:13.879 "name": "malloc0", 00:19:13.879 "num_blocks": 8192, 00:19:13.879 "block_size": 4096, 00:19:13.879 "physical_block_size": 4096, 00:19:13.879 "uuid": "ef9cbf89-e2e8-4546-9261-c2ce9509b479", 00:19:13.879 "optimal_io_boundary": 0, 00:19:13.879 "md_size": 0, 00:19:13.879 "dif_type": 0, 00:19:13.879 "dif_is_head_of_md": false, 00:19:13.879 "dif_pi_format": 0 00:19:13.879 } 00:19:13.879 }, 00:19:13.879 { 00:19:13.879 "method": "bdev_wait_for_examine" 00:19:13.879 } 00:19:13.879 ] 00:19:13.879 }, 00:19:13.879 { 00:19:13.879 "subsystem": "nbd", 00:19:13.879 "config": [] 00:19:13.879 }, 00:19:13.879 { 00:19:13.879 "subsystem": "scheduler", 00:19:13.879 "config": [ 00:19:13.879 { 00:19:13.879 "method": 
"framework_set_scheduler", 00:19:13.879 "params": { 00:19:13.879 "name": "static" 00:19:13.879 } 00:19:13.879 } 00:19:13.879 ] 00:19:13.879 }, 00:19:13.879 { 00:19:13.879 "subsystem": "nvmf", 00:19:13.879 "config": [ 00:19:13.879 { 00:19:13.879 "method": "nvmf_set_config", 00:19:13.879 "params": { 00:19:13.879 "discovery_filter": "match_any", 00:19:13.879 "admin_cmd_passthru": { 00:19:13.879 "identify_ctrlr": false 00:19:13.879 }, 00:19:13.879 "dhchap_digests": [ 00:19:13.879 "sha256", 00:19:13.879 "sha384", 00:19:13.879 "sha512" 00:19:13.879 ], 00:19:13.879 "dhchap_dhgroups": [ 00:19:13.879 "null", 00:19:13.879 "ffdhe2048", 00:19:13.879 "ffdhe3072", 00:19:13.879 "ffdhe4096", 00:19:13.879 "ffdhe6144", 00:19:13.879 "ffdhe8192" 00:19:13.879 ] 00:19:13.879 } 00:19:13.879 }, 00:19:13.879 { 00:19:13.879 "method": "nvmf_set_max_subsystems", 00:19:13.879 "params": { 00:19:13.879 "max_subsystems": 1024 00:19:13.879 } 00:19:13.879 }, 00:19:13.879 { 00:19:13.879 "method": "nvmf_set_crdt", 00:19:13.879 "params": { 00:19:13.879 "crdt1": 0, 00:19:13.879 "crdt2": 0, 00:19:13.879 "crdt3": 0 00:19:13.879 } 00:19:13.879 }, 00:19:13.879 { 00:19:13.879 "method": "nvmf_create_transport", 00:19:13.879 "params": { 00:19:13.879 "trtype": "TCP", 00:19:13.879 "max_queue_depth": 128, 00:19:13.879 "max_io_qpairs_per_ctrlr": 127, 00:19:13.879 "in_capsule_data_size": 4096, 00:19:13.879 "max_io_size": 131072, 00:19:13.879 "io_unit_size": 131072, 00:19:13.879 "max_aq_depth": 128, 00:19:13.879 "num_shared_buffers": 511, 00:19:13.879 "buf_cache_size": 4294967295, 00:19:13.879 "dif_insert_or_strip": false, 00:19:13.879 "zcopy": false, 00:19:13.879 "c2h_success": false, 00:19:13.879 "sock_priority": 0, 00:19:13.879 "abort_timeout_sec": 1, 00:19:13.879 "ack_timeout": 0, 00:19:13.879 "data_wr_pool_size": 0 00:19:13.879 } 00:19:13.879 }, 00:19:13.879 { 00:19:13.879 "method": "nvmf_create_subsystem", 00:19:13.879 "params": { 00:19:13.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.879 
"allow_any_host": false, 00:19:13.879 "serial_number": "SPDK00000000000001", 00:19:13.879 "model_number": "SPDK bdev Controller", 00:19:13.879 "max_namespaces": 10, 00:19:13.879 "min_cntlid": 1, 00:19:13.879 "max_cntlid": 65519, 00:19:13.879 "ana_reporting": false 00:19:13.879 } 00:19:13.879 }, 00:19:13.879 { 00:19:13.879 "method": "nvmf_subsystem_add_host", 00:19:13.879 "params": { 00:19:13.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.879 "host": "nqn.2016-06.io.spdk:host1", 00:19:13.879 "psk": "key0" 00:19:13.879 } 00:19:13.879 }, 00:19:13.879 { 00:19:13.879 "method": "nvmf_subsystem_add_ns", 00:19:13.879 "params": { 00:19:13.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.879 "namespace": { 00:19:13.879 "nsid": 1, 00:19:13.879 "bdev_name": "malloc0", 00:19:13.879 "nguid": "EF9CBF89E2E845469261C2CE9509B479", 00:19:13.879 "uuid": "ef9cbf89-e2e8-4546-9261-c2ce9509b479", 00:19:13.879 "no_auto_visible": false 00:19:13.879 } 00:19:13.879 } 00:19:13.879 }, 00:19:13.879 { 00:19:13.879 "method": "nvmf_subsystem_add_listener", 00:19:13.879 "params": { 00:19:13.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.879 "listen_address": { 00:19:13.879 "trtype": "TCP", 00:19:13.879 "adrfam": "IPv4", 00:19:13.879 "traddr": "10.0.0.2", 00:19:13.879 "trsvcid": "4420" 00:19:13.879 }, 00:19:13.879 "secure_channel": true 00:19:13.879 } 00:19:13.879 } 00:19:13.879 ] 00:19:13.879 } 00:19:13.879 ] 00:19:13.879 }' 00:19:13.880 12:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:14.139 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:14.139 "subsystems": [ 00:19:14.139 { 00:19:14.139 "subsystem": "keyring", 00:19:14.139 "config": [ 00:19:14.139 { 00:19:14.139 "method": "keyring_file_add_key", 00:19:14.139 "params": { 00:19:14.139 "name": "key0", 00:19:14.139 "path": "/tmp/tmp.6x3LmvrJxv" 00:19:14.139 } 
00:19:14.139 } 00:19:14.139 ] 00:19:14.139 }, 00:19:14.139 { 00:19:14.139 "subsystem": "iobuf", 00:19:14.139 "config": [ 00:19:14.139 { 00:19:14.139 "method": "iobuf_set_options", 00:19:14.139 "params": { 00:19:14.139 "small_pool_count": 8192, 00:19:14.139 "large_pool_count": 1024, 00:19:14.139 "small_bufsize": 8192, 00:19:14.139 "large_bufsize": 135168, 00:19:14.139 "enable_numa": false 00:19:14.139 } 00:19:14.139 } 00:19:14.139 ] 00:19:14.139 }, 00:19:14.139 { 00:19:14.139 "subsystem": "sock", 00:19:14.139 "config": [ 00:19:14.139 { 00:19:14.139 "method": "sock_set_default_impl", 00:19:14.139 "params": { 00:19:14.139 "impl_name": "posix" 00:19:14.139 } 00:19:14.139 }, 00:19:14.139 { 00:19:14.139 "method": "sock_impl_set_options", 00:19:14.139 "params": { 00:19:14.139 "impl_name": "ssl", 00:19:14.139 "recv_buf_size": 4096, 00:19:14.139 "send_buf_size": 4096, 00:19:14.139 "enable_recv_pipe": true, 00:19:14.139 "enable_quickack": false, 00:19:14.139 "enable_placement_id": 0, 00:19:14.139 "enable_zerocopy_send_server": true, 00:19:14.139 "enable_zerocopy_send_client": false, 00:19:14.139 "zerocopy_threshold": 0, 00:19:14.139 "tls_version": 0, 00:19:14.139 "enable_ktls": false 00:19:14.139 } 00:19:14.139 }, 00:19:14.139 { 00:19:14.139 "method": "sock_impl_set_options", 00:19:14.139 "params": { 00:19:14.139 "impl_name": "posix", 00:19:14.139 "recv_buf_size": 2097152, 00:19:14.139 "send_buf_size": 2097152, 00:19:14.139 "enable_recv_pipe": true, 00:19:14.139 "enable_quickack": false, 00:19:14.139 "enable_placement_id": 0, 00:19:14.139 "enable_zerocopy_send_server": true, 00:19:14.139 "enable_zerocopy_send_client": false, 00:19:14.139 "zerocopy_threshold": 0, 00:19:14.139 "tls_version": 0, 00:19:14.139 "enable_ktls": false 00:19:14.139 } 00:19:14.139 } 00:19:14.139 ] 00:19:14.139 }, 00:19:14.139 { 00:19:14.139 "subsystem": "vmd", 00:19:14.139 "config": [] 00:19:14.139 }, 00:19:14.139 { 00:19:14.139 "subsystem": "accel", 00:19:14.139 "config": [ 00:19:14.139 { 00:19:14.139 
"method": "accel_set_options", 00:19:14.139 "params": { 00:19:14.139 "small_cache_size": 128, 00:19:14.139 "large_cache_size": 16, 00:19:14.139 "task_count": 2048, 00:19:14.139 "sequence_count": 2048, 00:19:14.139 "buf_count": 2048 00:19:14.139 } 00:19:14.139 } 00:19:14.139 ] 00:19:14.139 }, 00:19:14.139 { 00:19:14.139 "subsystem": "bdev", 00:19:14.139 "config": [ 00:19:14.139 { 00:19:14.139 "method": "bdev_set_options", 00:19:14.139 "params": { 00:19:14.139 "bdev_io_pool_size": 65535, 00:19:14.139 "bdev_io_cache_size": 256, 00:19:14.139 "bdev_auto_examine": true, 00:19:14.139 "iobuf_small_cache_size": 128, 00:19:14.139 "iobuf_large_cache_size": 16 00:19:14.139 } 00:19:14.139 }, 00:19:14.139 { 00:19:14.139 "method": "bdev_raid_set_options", 00:19:14.139 "params": { 00:19:14.139 "process_window_size_kb": 1024, 00:19:14.139 "process_max_bandwidth_mb_sec": 0 00:19:14.139 } 00:19:14.139 }, 00:19:14.139 { 00:19:14.139 "method": "bdev_iscsi_set_options", 00:19:14.139 "params": { 00:19:14.139 "timeout_sec": 30 00:19:14.139 } 00:19:14.139 }, 00:19:14.139 { 00:19:14.139 "method": "bdev_nvme_set_options", 00:19:14.139 "params": { 00:19:14.139 "action_on_timeout": "none", 00:19:14.139 "timeout_us": 0, 00:19:14.139 "timeout_admin_us": 0, 00:19:14.139 "keep_alive_timeout_ms": 10000, 00:19:14.139 "arbitration_burst": 0, 00:19:14.139 "low_priority_weight": 0, 00:19:14.139 "medium_priority_weight": 0, 00:19:14.139 "high_priority_weight": 0, 00:19:14.139 "nvme_adminq_poll_period_us": 10000, 00:19:14.139 "nvme_ioq_poll_period_us": 0, 00:19:14.139 "io_queue_requests": 512, 00:19:14.139 "delay_cmd_submit": true, 00:19:14.139 "transport_retry_count": 4, 00:19:14.139 "bdev_retry_count": 3, 00:19:14.139 "transport_ack_timeout": 0, 00:19:14.139 "ctrlr_loss_timeout_sec": 0, 00:19:14.139 "reconnect_delay_sec": 0, 00:19:14.139 "fast_io_fail_timeout_sec": 0, 00:19:14.139 "disable_auto_failback": false, 00:19:14.139 "generate_uuids": false, 00:19:14.139 "transport_tos": 0, 00:19:14.139 
"nvme_error_stat": false, 00:19:14.139 "rdma_srq_size": 0, 00:19:14.139 "io_path_stat": false, 00:19:14.139 "allow_accel_sequence": false, 00:19:14.139 "rdma_max_cq_size": 0, 00:19:14.139 "rdma_cm_event_timeout_ms": 0, 00:19:14.139 "dhchap_digests": [ 00:19:14.139 "sha256", 00:19:14.139 "sha384", 00:19:14.139 "sha512" 00:19:14.139 ], 00:19:14.139 "dhchap_dhgroups": [ 00:19:14.139 "null", 00:19:14.139 "ffdhe2048", 00:19:14.139 "ffdhe3072", 00:19:14.139 "ffdhe4096", 00:19:14.139 "ffdhe6144", 00:19:14.139 "ffdhe8192" 00:19:14.139 ] 00:19:14.139 } 00:19:14.139 }, 00:19:14.139 { 00:19:14.139 "method": "bdev_nvme_attach_controller", 00:19:14.139 "params": { 00:19:14.139 "name": "TLSTEST", 00:19:14.139 "trtype": "TCP", 00:19:14.139 "adrfam": "IPv4", 00:19:14.139 "traddr": "10.0.0.2", 00:19:14.139 "trsvcid": "4420", 00:19:14.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.139 "prchk_reftag": false, 00:19:14.139 "prchk_guard": false, 00:19:14.139 "ctrlr_loss_timeout_sec": 0, 00:19:14.139 "reconnect_delay_sec": 0, 00:19:14.139 "fast_io_fail_timeout_sec": 0, 00:19:14.139 "psk": "key0", 00:19:14.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.139 "hdgst": false, 00:19:14.139 "ddgst": false, 00:19:14.139 "multipath": "multipath" 00:19:14.139 } 00:19:14.139 }, 00:19:14.139 { 00:19:14.139 "method": "bdev_nvme_set_hotplug", 00:19:14.139 "params": { 00:19:14.139 "period_us": 100000, 00:19:14.140 "enable": false 00:19:14.140 } 00:19:14.140 }, 00:19:14.140 { 00:19:14.140 "method": "bdev_wait_for_examine" 00:19:14.140 } 00:19:14.140 ] 00:19:14.140 }, 00:19:14.140 { 00:19:14.140 "subsystem": "nbd", 00:19:14.140 "config": [] 00:19:14.140 } 00:19:14.140 ] 00:19:14.140 }' 00:19:14.140 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 462526 00:19:14.140 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 462526 ']' 00:19:14.140 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 462526 00:19:14.140 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:14.140 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.140 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 462526 00:19:14.140 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:14.140 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:14.140 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 462526' 00:19:14.140 killing process with pid 462526 00:19:14.140 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 462526 00:19:14.140 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.140 00:19:14.140 Latency(us) 00:19:14.140 [2024-11-20T11:28:57.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.140 [2024-11-20T11:28:57.256Z] =================================================================================================================== 00:19:14.140 [2024-11-20T11:28:57.256Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:14.140 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 462526 00:19:14.399 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 462264 00:19:14.399 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 462264 ']' 00:19:14.399 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 462264 00:19:14.399 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:14.399 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.399 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 462264 00:19:14.399 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:14.399 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:14.399 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 462264' 00:19:14.399 killing process with pid 462264 00:19:14.399 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 462264 00:19:14.399 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 462264 00:19:14.659 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:14.659 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:14.659 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:14.659 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:14.659 "subsystems": [ 00:19:14.659 { 00:19:14.659 "subsystem": "keyring", 00:19:14.659 "config": [ 00:19:14.659 { 00:19:14.659 "method": "keyring_file_add_key", 00:19:14.659 "params": { 00:19:14.659 "name": "key0", 00:19:14.659 "path": "/tmp/tmp.6x3LmvrJxv" 00:19:14.659 } 00:19:14.659 } 00:19:14.659 ] 00:19:14.659 }, 00:19:14.659 { 00:19:14.659 "subsystem": "iobuf", 00:19:14.659 "config": [ 00:19:14.659 { 00:19:14.659 "method": "iobuf_set_options", 00:19:14.659 "params": { 00:19:14.659 "small_pool_count": 8192, 00:19:14.659 "large_pool_count": 1024, 00:19:14.659 "small_bufsize": 8192, 00:19:14.659 "large_bufsize": 135168, 00:19:14.659 "enable_numa": false 00:19:14.659 } 00:19:14.659 } 00:19:14.659 ] 00:19:14.659 }, 00:19:14.659 
{ 00:19:14.659 "subsystem": "sock", 00:19:14.659 "config": [ 00:19:14.659 { 00:19:14.659 "method": "sock_set_default_impl", 00:19:14.659 "params": { 00:19:14.659 "impl_name": "posix" 00:19:14.659 } 00:19:14.659 }, 00:19:14.659 { 00:19:14.659 "method": "sock_impl_set_options", 00:19:14.659 "params": { 00:19:14.659 "impl_name": "ssl", 00:19:14.659 "recv_buf_size": 4096, 00:19:14.659 "send_buf_size": 4096, 00:19:14.659 "enable_recv_pipe": true, 00:19:14.659 "enable_quickack": false, 00:19:14.659 "enable_placement_id": 0, 00:19:14.659 "enable_zerocopy_send_server": true, 00:19:14.659 "enable_zerocopy_send_client": false, 00:19:14.659 "zerocopy_threshold": 0, 00:19:14.659 "tls_version": 0, 00:19:14.659 "enable_ktls": false 00:19:14.659 } 00:19:14.659 }, 00:19:14.659 { 00:19:14.659 "method": "sock_impl_set_options", 00:19:14.660 "params": { 00:19:14.660 "impl_name": "posix", 00:19:14.660 "recv_buf_size": 2097152, 00:19:14.660 "send_buf_size": 2097152, 00:19:14.660 "enable_recv_pipe": true, 00:19:14.660 "enable_quickack": false, 00:19:14.660 "enable_placement_id": 0, 00:19:14.660 "enable_zerocopy_send_server": true, 00:19:14.660 "enable_zerocopy_send_client": false, 00:19:14.660 "zerocopy_threshold": 0, 00:19:14.660 "tls_version": 0, 00:19:14.660 "enable_ktls": false 00:19:14.660 } 00:19:14.660 } 00:19:14.660 ] 00:19:14.660 }, 00:19:14.660 { 00:19:14.660 "subsystem": "vmd", 00:19:14.660 "config": [] 00:19:14.660 }, 00:19:14.660 { 00:19:14.660 "subsystem": "accel", 00:19:14.660 "config": [ 00:19:14.660 { 00:19:14.660 "method": "accel_set_options", 00:19:14.660 "params": { 00:19:14.660 "small_cache_size": 128, 00:19:14.660 "large_cache_size": 16, 00:19:14.660 "task_count": 2048, 00:19:14.660 "sequence_count": 2048, 00:19:14.660 "buf_count": 2048 00:19:14.660 } 00:19:14.660 } 00:19:14.660 ] 00:19:14.660 }, 00:19:14.660 { 00:19:14.660 "subsystem": "bdev", 00:19:14.660 "config": [ 00:19:14.660 { 00:19:14.660 "method": "bdev_set_options", 00:19:14.660 "params": { 00:19:14.660 
"bdev_io_pool_size": 65535, 00:19:14.660 "bdev_io_cache_size": 256, 00:19:14.660 "bdev_auto_examine": true, 00:19:14.660 "iobuf_small_cache_size": 128, 00:19:14.660 "iobuf_large_cache_size": 16 00:19:14.660 } 00:19:14.660 }, 00:19:14.660 { 00:19:14.660 "method": "bdev_raid_set_options", 00:19:14.660 "params": { 00:19:14.660 "process_window_size_kb": 1024, 00:19:14.660 "process_max_bandwidth_mb_sec": 0 00:19:14.660 } 00:19:14.660 }, 00:19:14.660 { 00:19:14.660 "method": "bdev_iscsi_set_options", 00:19:14.660 "params": { 00:19:14.660 "timeout_sec": 30 00:19:14.660 } 00:19:14.660 }, 00:19:14.660 { 00:19:14.660 "method": "bdev_nvme_set_options", 00:19:14.660 "params": { 00:19:14.660 "action_on_timeout": "none", 00:19:14.660 "timeout_us": 0, 00:19:14.660 "timeout_admin_us": 0, 00:19:14.660 "keep_alive_timeout_ms": 10000, 00:19:14.660 "arbitration_burst": 0, 00:19:14.660 "low_priority_weight": 0, 00:19:14.660 "medium_priority_weight": 0, 00:19:14.660 "high_priority_weight": 0, 00:19:14.660 "nvme_adminq_poll_period_us": 10000, 00:19:14.660 "nvme_ioq_poll_period_us": 0, 00:19:14.660 "io_queue_requests": 0, 00:19:14.660 "delay_cmd_submit": true, 00:19:14.660 "transport_retry_count": 4, 00:19:14.660 "bdev_retry_count": 3, 00:19:14.660 "transport_ack_timeout": 0, 00:19:14.660 "ctrlr_loss_timeout_sec": 0, 00:19:14.660 "reconnect_delay_sec": 0, 00:19:14.660 "fast_io_fail_timeout_sec": 0, 00:19:14.660 "disable_auto_failback": false, 00:19:14.660 "generate_uuids": false, 00:19:14.660 "transport_tos": 0, 00:19:14.660 "nvme_error_stat": false, 00:19:14.660 "rdma_srq_size": 0, 00:19:14.660 "io_path_stat": false, 00:19:14.660 "allow_accel_sequence": false, 00:19:14.660 "rdma_max_cq_size": 0, 00:19:14.660 "rdma_cm_event_timeout_ms": 0, 00:19:14.660 "dhchap_digests": [ 00:19:14.660 "sha256", 00:19:14.660 "sha384", 00:19:14.660 "sha512" 00:19:14.660 ], 00:19:14.660 "dhchap_dhgroups": [ 00:19:14.660 "null", 00:19:14.660 "ffdhe2048", 00:19:14.660 "ffdhe3072", 00:19:14.660 "ffdhe4096", 
00:19:14.660 "ffdhe6144", 00:19:14.660 "ffdhe8192" 00:19:14.660 ] 00:19:14.660 } 00:19:14.660 }, 00:19:14.660 { 00:19:14.660 "method": "bdev_nvme_set_hotplug", 00:19:14.660 "params": { 00:19:14.660 "period_us": 100000, 00:19:14.660 "enable": false 00:19:14.660 } 00:19:14.660 }, 00:19:14.660 { 00:19:14.660 "method": "bdev_malloc_create", 00:19:14.660 "params": { 00:19:14.660 "name": "malloc0", 00:19:14.660 "num_blocks": 8192, 00:19:14.660 "block_size": 4096, 00:19:14.660 "physical_block_size": 4096, 00:19:14.660 "uuid": "ef9cbf89-e2e8-4546-9261-c2ce9509b479", 00:19:14.660 "optimal_io_boundary": 0, 00:19:14.660 "md_size": 0, 00:19:14.660 "dif_type": 0, 00:19:14.660 "dif_is_head_of_md": false, 00:19:14.660 "dif_pi_format": 0 00:19:14.660 } 00:19:14.660 }, 00:19:14.660 { 00:19:14.660 "method": "bdev_wait_for_examine" 00:19:14.660 } 00:19:14.660 ] 00:19:14.660 }, 00:19:14.660 { 00:19:14.660 "subsystem": "nbd", 00:19:14.660 "config": [] 00:19:14.660 }, 00:19:14.660 { 00:19:14.660 "subsystem": "scheduler", 00:19:14.660 "config": [ 00:19:14.660 { 00:19:14.660 "method": "framework_set_scheduler", 00:19:14.660 "params": { 00:19:14.660 "name": "static" 00:19:14.660 } 00:19:14.660 } 00:19:14.660 ] 00:19:14.660 }, 00:19:14.660 { 00:19:14.660 "subsystem": "nvmf", 00:19:14.660 "config": [ 00:19:14.660 { 00:19:14.660 "method": "nvmf_set_config", 00:19:14.660 "params": { 00:19:14.660 "discovery_filter": "match_any", 00:19:14.660 "admin_cmd_passthru": { 00:19:14.660 "identify_ctrlr": false 00:19:14.660 }, 00:19:14.660 "dhchap_digests": [ 00:19:14.660 "sha256", 00:19:14.660 "sha384", 00:19:14.660 "sha512" 00:19:14.660 ], 00:19:14.660 "dhchap_dhgroups": [ 00:19:14.660 "null", 00:19:14.660 "ffdhe2048", 00:19:14.660 "ffdhe3072", 00:19:14.660 "ffdhe4096", 00:19:14.660 "ffdhe6144", 00:19:14.660 "ffdhe8192" 00:19:14.661 ] 00:19:14.661 } 00:19:14.661 }, 00:19:14.661 { 00:19:14.661 "method": "nvmf_set_max_subsystems", 00:19:14.661 "params": { 00:19:14.661 "max_subsystems": 1024 00:19:14.661 
} 00:19:14.661 }, 00:19:14.661 { 00:19:14.661 "method": "nvmf_set_crdt", 00:19:14.661 "params": { 00:19:14.661 "crdt1": 0, 00:19:14.661 "crdt2": 0, 00:19:14.661 "crdt3": 0 00:19:14.661 } 00:19:14.661 }, 00:19:14.661 { 00:19:14.661 "method": "nvmf_create_transport", 00:19:14.661 "params": { 00:19:14.661 "trtype": "TCP", 00:19:14.661 "max_queue_depth": 128, 00:19:14.661 "max_io_qpairs_per_ctrlr": 127, 00:19:14.661 "in_capsule_data_size": 4096, 00:19:14.661 "max_io_size": 131072, 00:19:14.661 "io_unit_size": 131072, 00:19:14.661 "max_aq_depth": 128, 00:19:14.661 "num_shared_buffers": 511, 00:19:14.661 "buf_cache_size": 4294967295, 00:19:14.661 "dif_insert_or_strip": false, 00:19:14.661 "zcopy": false, 00:19:14.661 "c2h_success": false, 00:19:14.661 "sock_priority": 0, 00:19:14.661 "abort_timeout_sec": 1, 00:19:14.661 "ack_timeout": 0, 00:19:14.661 "data_wr_pool_size": 0 00:19:14.661 } 00:19:14.661 }, 00:19:14.661 { 00:19:14.661 "method": "nvmf_create_subsystem", 00:19:14.661 "params": { 00:19:14.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.661 "allow_any_host": false, 00:19:14.661 "serial_number": "SPDK00000000000001", 00:19:14.661 "model_number": "SPDK bdev Controller", 00:19:14.661 "max_namespaces": 10, 00:19:14.661 "min_cntlid": 1, 00:19:14.661 "max_cntlid": 65519, 00:19:14.661 "ana_reporting": false 00:19:14.661 } 00:19:14.661 }, 00:19:14.661 { 00:19:14.661 "method": "nvmf_subsystem_add_host", 00:19:14.661 "params": { 00:19:14.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.661 "host": "nqn.2016-06.io.spdk:host1", 00:19:14.661 "psk": "key0" 00:19:14.661 } 00:19:14.661 }, 00:19:14.661 { 00:19:14.661 "method": "nvmf_subsystem_add_ns", 00:19:14.661 "params": { 00:19:14.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.661 "namespace": { 00:19:14.661 "nsid": 1, 00:19:14.661 "bdev_name": "malloc0", 00:19:14.661 "nguid": "EF9CBF89E2E845469261C2CE9509B479", 00:19:14.661 "uuid": "ef9cbf89-e2e8-4546-9261-c2ce9509b479", 00:19:14.661 "no_auto_visible": false 
00:19:14.661 } 00:19:14.661 } 00:19:14.661 }, 00:19:14.661 { 00:19:14.661 "method": "nvmf_subsystem_add_listener", 00:19:14.661 "params": { 00:19:14.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.661 "listen_address": { 00:19:14.661 "trtype": "TCP", 00:19:14.661 "adrfam": "IPv4", 00:19:14.661 "traddr": "10.0.0.2", 00:19:14.661 "trsvcid": "4420" 00:19:14.661 }, 00:19:14.661 "secure_channel": true 00:19:14.661 } 00:19:14.661 } 00:19:14.661 ] 00:19:14.661 } 00:19:14.661 ] 00:19:14.661 }' 00:19:14.661 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.661 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=462779 00:19:14.661 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:14.661 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 462779 00:19:14.661 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 462779 ']' 00:19:14.661 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.661 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.661 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:14.661 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.661 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.661 [2024-11-20 12:28:57.631355] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:19:14.661 [2024-11-20 12:28:57.631401] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.661 [2024-11-20 12:28:57.708889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.661 [2024-11-20 12:28:57.749955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.661 [2024-11-20 12:28:57.749992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.661 [2024-11-20 12:28:57.749999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.661 [2024-11-20 12:28:57.750006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.661 [2024-11-20 12:28:57.750010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:14.661 [2024-11-20 12:28:57.750626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.920 [2024-11-20 12:28:57.963250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.920 [2024-11-20 12:28:57.995278] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:14.920 [2024-11-20 12:28:57.995455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.488 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.488 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:15.488 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:15.488 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:15.488 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.488 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.488 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=463022 00:19:15.488 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 463022 /var/tmp/bdevperf.sock 00:19:15.488 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 463022 ']' 00:19:15.488 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.488 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:15.488 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:15.488 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.488 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:15.488 "subsystems": [ 00:19:15.488 { 00:19:15.488 "subsystem": "keyring", 00:19:15.488 "config": [ 00:19:15.488 { 00:19:15.488 "method": "keyring_file_add_key", 00:19:15.488 "params": { 00:19:15.488 "name": "key0", 00:19:15.488 "path": "/tmp/tmp.6x3LmvrJxv" 00:19:15.488 } 00:19:15.488 } 00:19:15.488 ] 00:19:15.488 }, 00:19:15.488 { 00:19:15.488 "subsystem": "iobuf", 00:19:15.488 "config": [ 00:19:15.488 { 00:19:15.488 "method": "iobuf_set_options", 00:19:15.488 "params": { 00:19:15.488 "small_pool_count": 8192, 00:19:15.488 "large_pool_count": 1024, 00:19:15.488 "small_bufsize": 8192, 00:19:15.488 "large_bufsize": 135168, 00:19:15.488 "enable_numa": false 00:19:15.488 } 00:19:15.488 } 00:19:15.488 ] 00:19:15.488 }, 00:19:15.488 { 00:19:15.488 "subsystem": "sock", 00:19:15.488 "config": [ 00:19:15.488 { 00:19:15.488 "method": "sock_set_default_impl", 00:19:15.488 "params": { 00:19:15.488 "impl_name": "posix" 00:19:15.488 } 00:19:15.488 }, 00:19:15.488 { 00:19:15.488 "method": "sock_impl_set_options", 00:19:15.488 "params": { 00:19:15.488 "impl_name": "ssl", 00:19:15.488 "recv_buf_size": 4096, 00:19:15.488 "send_buf_size": 4096, 00:19:15.488 "enable_recv_pipe": true, 00:19:15.488 "enable_quickack": false, 00:19:15.488 "enable_placement_id": 0, 00:19:15.488 "enable_zerocopy_send_server": true, 00:19:15.488 "enable_zerocopy_send_client": false, 00:19:15.488 "zerocopy_threshold": 0, 00:19:15.488 "tls_version": 0, 00:19:15.488 "enable_ktls": false 00:19:15.488 } 00:19:15.488 }, 00:19:15.488 { 00:19:15.488 "method": "sock_impl_set_options", 00:19:15.488 "params": { 
00:19:15.488 "impl_name": "posix", 00:19:15.488 "recv_buf_size": 2097152, 00:19:15.489 "send_buf_size": 2097152, 00:19:15.489 "enable_recv_pipe": true, 00:19:15.489 "enable_quickack": false, 00:19:15.489 "enable_placement_id": 0, 00:19:15.489 "enable_zerocopy_send_server": true, 00:19:15.489 "enable_zerocopy_send_client": false, 00:19:15.489 "zerocopy_threshold": 0, 00:19:15.489 "tls_version": 0, 00:19:15.489 "enable_ktls": false 00:19:15.489 } 00:19:15.489 } 00:19:15.489 ] 00:19:15.489 }, 00:19:15.489 { 00:19:15.489 "subsystem": "vmd", 00:19:15.489 "config": [] 00:19:15.489 }, 00:19:15.489 { 00:19:15.489 "subsystem": "accel", 00:19:15.489 "config": [ 00:19:15.489 { 00:19:15.489 "method": "accel_set_options", 00:19:15.489 "params": { 00:19:15.489 "small_cache_size": 128, 00:19:15.489 "large_cache_size": 16, 00:19:15.489 "task_count": 2048, 00:19:15.489 "sequence_count": 2048, 00:19:15.489 "buf_count": 2048 00:19:15.489 } 00:19:15.489 } 00:19:15.489 ] 00:19:15.489 }, 00:19:15.489 { 00:19:15.489 "subsystem": "bdev", 00:19:15.489 "config": [ 00:19:15.489 { 00:19:15.489 "method": "bdev_set_options", 00:19:15.489 "params": { 00:19:15.489 "bdev_io_pool_size": 65535, 00:19:15.489 "bdev_io_cache_size": 256, 00:19:15.489 "bdev_auto_examine": true, 00:19:15.489 "iobuf_small_cache_size": 128, 00:19:15.489 "iobuf_large_cache_size": 16 00:19:15.489 } 00:19:15.489 }, 00:19:15.489 { 00:19:15.489 "method": "bdev_raid_set_options", 00:19:15.489 "params": { 00:19:15.489 "process_window_size_kb": 1024, 00:19:15.489 "process_max_bandwidth_mb_sec": 0 00:19:15.489 } 00:19:15.489 }, 00:19:15.489 { 00:19:15.489 "method": "bdev_iscsi_set_options", 00:19:15.489 "params": { 00:19:15.489 "timeout_sec": 30 00:19:15.489 } 00:19:15.489 }, 00:19:15.489 { 00:19:15.489 "method": "bdev_nvme_set_options", 00:19:15.489 "params": { 00:19:15.489 "action_on_timeout": "none", 00:19:15.489 "timeout_us": 0, 00:19:15.489 "timeout_admin_us": 0, 00:19:15.489 "keep_alive_timeout_ms": 10000, 00:19:15.489 
"arbitration_burst": 0, 00:19:15.489 "low_priority_weight": 0, 00:19:15.489 "medium_priority_weight": 0, 00:19:15.489 "high_priority_weight": 0, 00:19:15.489 "nvme_adminq_poll_period_us": 10000, 00:19:15.489 "nvme_ioq_poll_period_us": 0, 00:19:15.489 "io_queue_requests": 512, 00:19:15.489 "delay_cmd_submit": true, 00:19:15.489 "transport_retry_count": 4, 00:19:15.489 "bdev_retry_count": 3, 00:19:15.489 "transport_ack_timeout": 0, 00:19:15.489 "ctrlr_loss_timeout_sec": 0, 00:19:15.489 "reconnect_delay_sec": 0, 00:19:15.489 "fast_io_fail_timeout_sec": 0, 00:19:15.489 "disable_auto_failback": false, 00:19:15.489 "generate_uuids": false, 00:19:15.489 "transport_tos": 0, 00:19:15.489 "nvme_error_stat": false, 00:19:15.489 "rdma_srq_size": 0, 00:19:15.489 "io_path_stat": false, 00:19:15.489 "allow_accel_sequence": false, 00:19:15.489 "rdma_max_cq_size": 0, 00:19:15.489 "rdma_cm_event_timeout_ms": 0, 00:19:15.489 "dhchap_digests": [ 00:19:15.489 "sha256", 00:19:15.489 "sha384", 00:19:15.489 "sha512" 00:19:15.489 ], 00:19:15.489 "dhchap_dhgroups": [ 00:19:15.489 "null", 00:19:15.489 "ffdhe2048", 00:19:15.489 "ffdhe3072", 00:19:15.489 "ffdhe4096", 00:19:15.489 "ffdhe6144", 00:19:15.489 "ffdhe8192" 00:19:15.489 ] 00:19:15.489 } 00:19:15.489 }, 00:19:15.489 { 00:19:15.489 "method": "bdev_nvme_attach_controller", 00:19:15.489 "params": { 00:19:15.489 "name": "TLSTEST", 00:19:15.489 "trtype": "TCP", 00:19:15.489 "adrfam": "IPv4", 00:19:15.489 "traddr": "10.0.0.2", 00:19:15.489 "trsvcid": "4420", 00:19:15.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.489 "prchk_reftag": false, 00:19:15.489 "prchk_guard": false, 00:19:15.489 "ctrlr_loss_timeout_sec": 0, 00:19:15.489 "reconnect_delay_sec": 0, 00:19:15.489 "fast_io_fail_timeout_sec": 0, 00:19:15.489 "psk": "key0", 00:19:15.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:15.489 "hdgst": false, 00:19:15.489 "ddgst": false, 00:19:15.489 "multipath": "multipath" 00:19:15.489 } 00:19:15.489 }, 00:19:15.489 { 00:19:15.489 
"method": "bdev_nvme_set_hotplug", 00:19:15.489 "params": { 00:19:15.489 "period_us": 100000, 00:19:15.489 "enable": false 00:19:15.489 } 00:19:15.489 }, 00:19:15.489 { 00:19:15.489 "method": "bdev_wait_for_examine" 00:19:15.489 } 00:19:15.489 ] 00:19:15.489 }, 00:19:15.489 { 00:19:15.489 "subsystem": "nbd", 00:19:15.489 "config": [] 00:19:15.489 } 00:19:15.489 ] 00:19:15.489 }' 00:19:15.489 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.489 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.489 [2024-11-20 12:28:58.558331] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:19:15.489 [2024-11-20 12:28:58.558377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463022 ] 00:19:15.748 [2024-11-20 12:28:58.632932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.748 [2024-11-20 12:28:58.673494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.748 [2024-11-20 12:28:58.826219] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.315 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.315 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:16.315 12:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:16.573 Running I/O for 10 seconds... 
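The bdevperf initiator above takes its whole configuration as JSON on an inherited file descriptor (`-c /dev/fd/63`): a `keyring_file_add_key` entry registers the TLS PSK file, and `bdev_nvme_attach_controller` references it with `"psk": "key0"`. A minimal sketch of that pattern follows; the PSK path and temp-file handling here are illustrative stand-ins, not the exact values the test generates.

```shell
#!/bin/sh
# Sketch: build a minimal bdevperf-style JSON config that registers a TLS PSK
# in the keyring subsystem and references it from bdev_nvme_attach_controller.
# The PSK path (/tmp/psk.txt) is a placeholder for illustration.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/psk.txt" } }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "TLSTEST", "trtype": "TCP", "adrfam": "IPv4",
            "traddr": "10.0.0.2", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "psk": "key0" } }
      ]
    }
  ]
}
EOF
# Validate the JSON before handing it to bdevperf (e.g. bdevperf ... -c "$cfg").
python3 -m json.tool < "$cfg" > /dev/null && echo "config OK"
```

Feeding the config through `/dev/fd/63` (as the test does) is just process substitution over the same JSON, which avoids leaving the PSK-referencing config on disk.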
00:19:18.446 5236.00 IOPS, 20.45 MiB/s [2024-11-20T11:29:02.938Z] 5298.00 IOPS, 20.70 MiB/s [2024-11-20T11:29:03.875Z] 5363.33 IOPS, 20.95 MiB/s [2024-11-20T11:29:04.810Z] 5339.25 IOPS, 20.86 MiB/s [2024-11-20T11:29:05.745Z] 5339.20 IOPS, 20.86 MiB/s [2024-11-20T11:29:06.679Z] 5361.67 IOPS, 20.94 MiB/s [2024-11-20T11:29:07.614Z] 5365.86 IOPS, 20.96 MiB/s [2024-11-20T11:29:08.548Z] 5382.38 IOPS, 21.02 MiB/s [2024-11-20T11:29:09.925Z] 5397.11 IOPS, 21.08 MiB/s [2024-11-20T11:29:09.925Z] 5406.60 IOPS, 21.12 MiB/s 00:19:26.809 Latency(us) 00:19:26.809 [2024-11-20T11:29:09.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.809 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:26.809 Verification LBA range: start 0x0 length 0x2000 00:19:26.809 TLSTESTn1 : 10.01 5412.45 21.14 0.00 0.00 23612.97 5356.86 29861.62 00:19:26.809 [2024-11-20T11:29:09.925Z] =================================================================================================================== 00:19:26.809 [2024-11-20T11:29:09.925Z] Total : 5412.45 21.14 0.00 0.00 23612.97 5356.86 29861.62 00:19:26.809 { 00:19:26.809 "results": [ 00:19:26.809 { 00:19:26.809 "job": "TLSTESTn1", 00:19:26.809 "core_mask": "0x4", 00:19:26.809 "workload": "verify", 00:19:26.809 "status": "finished", 00:19:26.809 "verify_range": { 00:19:26.809 "start": 0, 00:19:26.809 "length": 8192 00:19:26.809 }, 00:19:26.809 "queue_depth": 128, 00:19:26.809 "io_size": 4096, 00:19:26.809 "runtime": 10.012285, 00:19:26.809 "iops": 5412.450804187056, 00:19:26.809 "mibps": 21.142385953855687, 00:19:26.809 "io_failed": 0, 00:19:26.809 "io_timeout": 0, 00:19:26.809 "avg_latency_us": 23612.971885432606, 00:19:26.809 "min_latency_us": 5356.855652173913, 00:19:26.809 "max_latency_us": 29861.620869565217 00:19:26.809 } 00:19:26.809 ], 00:19:26.809 "core_count": 1 00:19:26.809 } 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 463022 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 463022 ']' 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 463022 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 463022 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 463022' 00:19:26.809 killing process with pid 463022 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 463022 00:19:26.809 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.809 00:19:26.809 Latency(us) 00:19:26.809 [2024-11-20T11:29:09.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.809 [2024-11-20T11:29:09.925Z] =================================================================================================================== 00:19:26.809 [2024-11-20T11:29:09.925Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 463022 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 462779 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # '[' -z 462779 ']' 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 462779 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 462779 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:26.809 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 462779' 00:19:26.810 killing process with pid 462779 00:19:26.810 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 462779 00:19:26.810 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 462779 00:19:27.069 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:27.069 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:27.069 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:27.069 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.069 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=464864 00:19:27.069 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:27.069 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 464864 00:19:27.069 12:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 464864 ']' 00:19:27.069 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.069 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.069 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.069 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.069 12:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.069 [2024-11-20 12:29:10.050441] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:19:27.069 [2024-11-20 12:29:10.050496] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.069 [2024-11-20 12:29:10.129305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.069 [2024-11-20 12:29:10.168709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.069 [2024-11-20 12:29:10.168750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.069 [2024-11-20 12:29:10.168757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.069 [2024-11-20 12:29:10.168763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:27.069 [2024-11-20 12:29:10.168768] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:27.069 [2024-11-20 12:29:10.169345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.004 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.004 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:28.004 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:28.004 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:28.005 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.005 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.005 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.6x3LmvrJxv 00:19:28.005 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6x3LmvrJxv 00:19:28.005 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:28.005 [2024-11-20 12:29:11.091044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.263 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:28.263 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:28.521 [2024-11-20 12:29:11.484029] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:19:28.521 [2024-11-20 12:29:11.484224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.521 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:28.780 malloc0 00:19:28.780 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:29.038 12:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6x3LmvrJxv 00:19:29.038 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:29.297 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:29.297 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=465345 00:19:29.297 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.297 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 465345 /var/tmp/bdevperf.sock 00:19:29.297 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 465345 ']' 00:19:29.297 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.297 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.297 12:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.297 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.297 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.297 [2024-11-20 12:29:12.356391] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:19:29.297 [2024-11-20 12:29:12.356439] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465345 ] 00:19:29.556 [2024-11-20 12:29:12.433221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.556 [2024-11-20 12:29:12.474365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.556 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.556 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:29.556 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6x3LmvrJxv 00:19:29.815 12:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:29.815 [2024-11-20 12:29:12.929806] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered 
experimental 00:19:30.073 nvme0n1 00:19:30.073 12:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:30.073 Running I/O for 1 seconds... 00:19:31.009 5275.00 IOPS, 20.61 MiB/s 00:19:31.009 Latency(us) 00:19:31.009 [2024-11-20T11:29:14.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.009 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:31.009 Verification LBA range: start 0x0 length 0x2000 00:19:31.009 nvme0n1 : 1.02 5314.24 20.76 0.00 0.00 23893.18 4843.97 21769.35 00:19:31.009 [2024-11-20T11:29:14.125Z] =================================================================================================================== 00:19:31.009 [2024-11-20T11:29:14.125Z] Total : 5314.24 20.76 0.00 0.00 23893.18 4843.97 21769.35 00:19:31.009 { 00:19:31.009 "results": [ 00:19:31.009 { 00:19:31.009 "job": "nvme0n1", 00:19:31.009 "core_mask": "0x2", 00:19:31.009 "workload": "verify", 00:19:31.009 "status": "finished", 00:19:31.009 "verify_range": { 00:19:31.009 "start": 0, 00:19:31.009 "length": 8192 00:19:31.009 }, 00:19:31.009 "queue_depth": 128, 00:19:31.009 "io_size": 4096, 00:19:31.009 "runtime": 1.016703, 00:19:31.009 "iops": 5314.236310899053, 00:19:31.009 "mibps": 20.758735589449426, 00:19:31.009 "io_failed": 0, 00:19:31.009 "io_timeout": 0, 00:19:31.009 "avg_latency_us": 23893.177099357043, 00:19:31.009 "min_latency_us": 4843.965217391305, 00:19:31.009 "max_latency_us": 21769.34956521739 00:19:31.009 } 00:19:31.009 ], 00:19:31.009 "core_count": 1 00:19:31.009 } 00:19:31.268 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 465345 00:19:31.268 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 465345 ']' 00:19:31.268 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 465345 00:19:31.268 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:31.268 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.268 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465345 00:19:31.268 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:31.268 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:31.268 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465345' 00:19:31.268 killing process with pid 465345 00:19:31.268 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 465345 00:19:31.268 Received shutdown signal, test time was about 1.000000 seconds 00:19:31.268 00:19:31.268 Latency(us) 00:19:31.268 [2024-11-20T11:29:14.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.268 [2024-11-20T11:29:14.384Z] =================================================================================================================== 00:19:31.268 [2024-11-20T11:29:14.384Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:31.268 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 465345 00:19:31.268 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 464864 00:19:31.268 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 464864 ']' 00:19:31.268 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 464864 00:19:31.268 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:31.268 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.268 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 464864 00:19:31.528 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:31.528 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:31.528 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 464864' 00:19:31.528 killing process with pid 464864 00:19:31.528 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 464864 00:19:31.528 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 464864 00:19:31.528 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:31.528 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:31.529 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.529 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.529 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=465598 00:19:31.529 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:31.529 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 465598 00:19:31.529 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 465598 ']' 00:19:31.529 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.529 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:31.529 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.529 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.529 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.529 [2024-11-20 12:29:14.633285] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:19:31.529 [2024-11-20 12:29:14.633338] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.788 [2024-11-20 12:29:14.711795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.788 [2024-11-20 12:29:14.750290] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.788 [2024-11-20 12:29:14.750326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.788 [2024-11-20 12:29:14.750333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.788 [2024-11-20 12:29:14.750339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.788 [2024-11-20 12:29:14.750344] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:31.788 [2024-11-20 12:29:14.750935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.788 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.788 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:31.788 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:31.788 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.788 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.788 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.788 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:31.788 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.788 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.788 [2024-11-20 12:29:14.894952] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.048 malloc0 00:19:32.048 [2024-11-20 12:29:14.923304] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:32.048 [2024-11-20 12:29:14.923498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.048 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.048 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=465794 00:19:32.048 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:32.048 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 465794 /var/tmp/bdevperf.sock 00:19:32.048 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 465794 ']' 00:19:32.048 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.048 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.048 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.048 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.048 12:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.048 [2024-11-20 12:29:14.999218] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:19:32.048 [2024-11-20 12:29:14.999261] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465794 ] 00:19:32.048 [2024-11-20 12:29:15.073973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.048 [2024-11-20 12:29:15.116542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.307 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.307 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:32.307 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6x3LmvrJxv 00:19:32.307 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:32.565 [2024-11-20 12:29:15.573363] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.565 nvme0n1 00:19:32.565 12:29:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:32.824 Running I/O for 1 seconds... 
00:19:33.761 5379.00 IOPS, 21.01 MiB/s 00:19:33.761 Latency(us) 00:19:33.761 [2024-11-20T11:29:16.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.761 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:33.761 Verification LBA range: start 0x0 length 0x2000 00:19:33.761 nvme0n1 : 1.01 5434.95 21.23 0.00 0.00 23397.67 4957.94 22681.15 00:19:33.761 [2024-11-20T11:29:16.877Z] =================================================================================================================== 00:19:33.761 [2024-11-20T11:29:16.877Z] Total : 5434.95 21.23 0.00 0.00 23397.67 4957.94 22681.15 00:19:33.761 { 00:19:33.761 "results": [ 00:19:33.761 { 00:19:33.761 "job": "nvme0n1", 00:19:33.761 "core_mask": "0x2", 00:19:33.761 "workload": "verify", 00:19:33.761 "status": "finished", 00:19:33.761 "verify_range": { 00:19:33.761 "start": 0, 00:19:33.761 "length": 8192 00:19:33.761 }, 00:19:33.761 "queue_depth": 128, 00:19:33.761 "io_size": 4096, 00:19:33.761 "runtime": 1.01344, 00:19:33.761 "iops": 5434.954215345753, 00:19:33.761 "mibps": 21.23028990369435, 00:19:33.761 "io_failed": 0, 00:19:33.761 "io_timeout": 0, 00:19:33.761 "avg_latency_us": 23397.669610684854, 00:19:33.761 "min_latency_us": 4957.940869565217, 00:19:33.761 "max_latency_us": 22681.154782608697 00:19:33.761 } 00:19:33.761 ], 00:19:33.761 "core_count": 1 00:19:33.761 } 00:19:33.761 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:33.761 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.761 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.020 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.020 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:34.020 "subsystems": [ 00:19:34.020 { 00:19:34.020 "subsystem": 
"keyring", 00:19:34.020 "config": [ 00:19:34.020 { 00:19:34.020 "method": "keyring_file_add_key", 00:19:34.020 "params": { 00:19:34.020 "name": "key0", 00:19:34.020 "path": "/tmp/tmp.6x3LmvrJxv" 00:19:34.020 } 00:19:34.020 } 00:19:34.020 ] 00:19:34.020 }, 00:19:34.020 { 00:19:34.020 "subsystem": "iobuf", 00:19:34.020 "config": [ 00:19:34.020 { 00:19:34.020 "method": "iobuf_set_options", 00:19:34.020 "params": { 00:19:34.020 "small_pool_count": 8192, 00:19:34.020 "large_pool_count": 1024, 00:19:34.020 "small_bufsize": 8192, 00:19:34.020 "large_bufsize": 135168, 00:19:34.020 "enable_numa": false 00:19:34.020 } 00:19:34.020 } 00:19:34.020 ] 00:19:34.020 }, 00:19:34.020 { 00:19:34.020 "subsystem": "sock", 00:19:34.020 "config": [ 00:19:34.020 { 00:19:34.020 "method": "sock_set_default_impl", 00:19:34.020 "params": { 00:19:34.020 "impl_name": "posix" 00:19:34.020 } 00:19:34.020 }, 00:19:34.020 { 00:19:34.020 "method": "sock_impl_set_options", 00:19:34.020 "params": { 00:19:34.020 "impl_name": "ssl", 00:19:34.020 "recv_buf_size": 4096, 00:19:34.020 "send_buf_size": 4096, 00:19:34.020 "enable_recv_pipe": true, 00:19:34.020 "enable_quickack": false, 00:19:34.020 "enable_placement_id": 0, 00:19:34.020 "enable_zerocopy_send_server": true, 00:19:34.020 "enable_zerocopy_send_client": false, 00:19:34.020 "zerocopy_threshold": 0, 00:19:34.020 "tls_version": 0, 00:19:34.020 "enable_ktls": false 00:19:34.020 } 00:19:34.020 }, 00:19:34.020 { 00:19:34.020 "method": "sock_impl_set_options", 00:19:34.020 "params": { 00:19:34.020 "impl_name": "posix", 00:19:34.020 "recv_buf_size": 2097152, 00:19:34.020 "send_buf_size": 2097152, 00:19:34.020 "enable_recv_pipe": true, 00:19:34.020 "enable_quickack": false, 00:19:34.020 "enable_placement_id": 0, 00:19:34.020 "enable_zerocopy_send_server": true, 00:19:34.021 "enable_zerocopy_send_client": false, 00:19:34.021 "zerocopy_threshold": 0, 00:19:34.021 "tls_version": 0, 00:19:34.021 "enable_ktls": false 00:19:34.021 } 00:19:34.021 } 00:19:34.021 
] 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "subsystem": "vmd", 00:19:34.021 "config": [] 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "subsystem": "accel", 00:19:34.021 "config": [ 00:19:34.021 { 00:19:34.021 "method": "accel_set_options", 00:19:34.021 "params": { 00:19:34.021 "small_cache_size": 128, 00:19:34.021 "large_cache_size": 16, 00:19:34.021 "task_count": 2048, 00:19:34.021 "sequence_count": 2048, 00:19:34.021 "buf_count": 2048 00:19:34.021 } 00:19:34.021 } 00:19:34.021 ] 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "subsystem": "bdev", 00:19:34.021 "config": [ 00:19:34.021 { 00:19:34.021 "method": "bdev_set_options", 00:19:34.021 "params": { 00:19:34.021 "bdev_io_pool_size": 65535, 00:19:34.021 "bdev_io_cache_size": 256, 00:19:34.021 "bdev_auto_examine": true, 00:19:34.021 "iobuf_small_cache_size": 128, 00:19:34.021 "iobuf_large_cache_size": 16 00:19:34.021 } 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "method": "bdev_raid_set_options", 00:19:34.021 "params": { 00:19:34.021 "process_window_size_kb": 1024, 00:19:34.021 "process_max_bandwidth_mb_sec": 0 00:19:34.021 } 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "method": "bdev_iscsi_set_options", 00:19:34.021 "params": { 00:19:34.021 "timeout_sec": 30 00:19:34.021 } 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "method": "bdev_nvme_set_options", 00:19:34.021 "params": { 00:19:34.021 "action_on_timeout": "none", 00:19:34.021 "timeout_us": 0, 00:19:34.021 "timeout_admin_us": 0, 00:19:34.021 "keep_alive_timeout_ms": 10000, 00:19:34.021 "arbitration_burst": 0, 00:19:34.021 "low_priority_weight": 0, 00:19:34.021 "medium_priority_weight": 0, 00:19:34.021 "high_priority_weight": 0, 00:19:34.021 "nvme_adminq_poll_period_us": 10000, 00:19:34.021 "nvme_ioq_poll_period_us": 0, 00:19:34.021 "io_queue_requests": 0, 00:19:34.021 "delay_cmd_submit": true, 00:19:34.021 "transport_retry_count": 4, 00:19:34.021 "bdev_retry_count": 3, 00:19:34.021 "transport_ack_timeout": 0, 00:19:34.021 "ctrlr_loss_timeout_sec": 0, 
00:19:34.021 "reconnect_delay_sec": 0, 00:19:34.021 "fast_io_fail_timeout_sec": 0, 00:19:34.021 "disable_auto_failback": false, 00:19:34.021 "generate_uuids": false, 00:19:34.021 "transport_tos": 0, 00:19:34.021 "nvme_error_stat": false, 00:19:34.021 "rdma_srq_size": 0, 00:19:34.021 "io_path_stat": false, 00:19:34.021 "allow_accel_sequence": false, 00:19:34.021 "rdma_max_cq_size": 0, 00:19:34.021 "rdma_cm_event_timeout_ms": 0, 00:19:34.021 "dhchap_digests": [ 00:19:34.021 "sha256", 00:19:34.021 "sha384", 00:19:34.021 "sha512" 00:19:34.021 ], 00:19:34.021 "dhchap_dhgroups": [ 00:19:34.021 "null", 00:19:34.021 "ffdhe2048", 00:19:34.021 "ffdhe3072", 00:19:34.021 "ffdhe4096", 00:19:34.021 "ffdhe6144", 00:19:34.021 "ffdhe8192" 00:19:34.021 ] 00:19:34.021 } 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "method": "bdev_nvme_set_hotplug", 00:19:34.021 "params": { 00:19:34.021 "period_us": 100000, 00:19:34.021 "enable": false 00:19:34.021 } 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "method": "bdev_malloc_create", 00:19:34.021 "params": { 00:19:34.021 "name": "malloc0", 00:19:34.021 "num_blocks": 8192, 00:19:34.021 "block_size": 4096, 00:19:34.021 "physical_block_size": 4096, 00:19:34.021 "uuid": "c6f1d290-531a-4086-94a9-358776158b1b", 00:19:34.021 "optimal_io_boundary": 0, 00:19:34.021 "md_size": 0, 00:19:34.021 "dif_type": 0, 00:19:34.021 "dif_is_head_of_md": false, 00:19:34.021 "dif_pi_format": 0 00:19:34.021 } 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "method": "bdev_wait_for_examine" 00:19:34.021 } 00:19:34.021 ] 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "subsystem": "nbd", 00:19:34.021 "config": [] 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "subsystem": "scheduler", 00:19:34.021 "config": [ 00:19:34.021 { 00:19:34.021 "method": "framework_set_scheduler", 00:19:34.021 "params": { 00:19:34.021 "name": "static" 00:19:34.021 } 00:19:34.021 } 00:19:34.021 ] 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "subsystem": "nvmf", 00:19:34.021 "config": [ 00:19:34.021 { 
00:19:34.021 "method": "nvmf_set_config", 00:19:34.021 "params": { 00:19:34.021 "discovery_filter": "match_any", 00:19:34.021 "admin_cmd_passthru": { 00:19:34.021 "identify_ctrlr": false 00:19:34.021 }, 00:19:34.021 "dhchap_digests": [ 00:19:34.021 "sha256", 00:19:34.021 "sha384", 00:19:34.021 "sha512" 00:19:34.021 ], 00:19:34.021 "dhchap_dhgroups": [ 00:19:34.021 "null", 00:19:34.021 "ffdhe2048", 00:19:34.021 "ffdhe3072", 00:19:34.021 "ffdhe4096", 00:19:34.021 "ffdhe6144", 00:19:34.021 "ffdhe8192" 00:19:34.021 ] 00:19:34.021 } 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "method": "nvmf_set_max_subsystems", 00:19:34.021 "params": { 00:19:34.021 "max_subsystems": 1024 00:19:34.021 } 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "method": "nvmf_set_crdt", 00:19:34.021 "params": { 00:19:34.021 "crdt1": 0, 00:19:34.021 "crdt2": 0, 00:19:34.021 "crdt3": 0 00:19:34.021 } 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "method": "nvmf_create_transport", 00:19:34.021 "params": { 00:19:34.021 "trtype": "TCP", 00:19:34.021 "max_queue_depth": 128, 00:19:34.021 "max_io_qpairs_per_ctrlr": 127, 00:19:34.021 "in_capsule_data_size": 4096, 00:19:34.021 "max_io_size": 131072, 00:19:34.021 "io_unit_size": 131072, 00:19:34.021 "max_aq_depth": 128, 00:19:34.021 "num_shared_buffers": 511, 00:19:34.021 "buf_cache_size": 4294967295, 00:19:34.021 "dif_insert_or_strip": false, 00:19:34.021 "zcopy": false, 00:19:34.021 "c2h_success": false, 00:19:34.021 "sock_priority": 0, 00:19:34.021 "abort_timeout_sec": 1, 00:19:34.021 "ack_timeout": 0, 00:19:34.021 "data_wr_pool_size": 0 00:19:34.021 } 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "method": "nvmf_create_subsystem", 00:19:34.021 "params": { 00:19:34.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.021 "allow_any_host": false, 00:19:34.021 "serial_number": "00000000000000000000", 00:19:34.021 "model_number": "SPDK bdev Controller", 00:19:34.021 "max_namespaces": 32, 00:19:34.021 "min_cntlid": 1, 00:19:34.021 "max_cntlid": 65519, 00:19:34.021 
"ana_reporting": false 00:19:34.021 } 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "method": "nvmf_subsystem_add_host", 00:19:34.021 "params": { 00:19:34.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.021 "host": "nqn.2016-06.io.spdk:host1", 00:19:34.021 "psk": "key0" 00:19:34.021 } 00:19:34.021 }, 00:19:34.021 { 00:19:34.021 "method": "nvmf_subsystem_add_ns", 00:19:34.021 "params": { 00:19:34.021 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.022 "namespace": { 00:19:34.022 "nsid": 1, 00:19:34.022 "bdev_name": "malloc0", 00:19:34.022 "nguid": "C6F1D290531A408694A9358776158B1B", 00:19:34.022 "uuid": "c6f1d290-531a-4086-94a9-358776158b1b", 00:19:34.022 "no_auto_visible": false 00:19:34.022 } 00:19:34.022 } 00:19:34.022 }, 00:19:34.022 { 00:19:34.022 "method": "nvmf_subsystem_add_listener", 00:19:34.022 "params": { 00:19:34.022 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.022 "listen_address": { 00:19:34.022 "trtype": "TCP", 00:19:34.022 "adrfam": "IPv4", 00:19:34.022 "traddr": "10.0.0.2", 00:19:34.022 "trsvcid": "4420" 00:19:34.022 }, 00:19:34.022 "secure_channel": false, 00:19:34.022 "sock_impl": "ssl" 00:19:34.022 } 00:19:34.022 } 00:19:34.022 ] 00:19:34.022 } 00:19:34.022 ] 00:19:34.022 }' 00:19:34.022 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:34.281 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:34.281 "subsystems": [ 00:19:34.281 { 00:19:34.281 "subsystem": "keyring", 00:19:34.281 "config": [ 00:19:34.281 { 00:19:34.281 "method": "keyring_file_add_key", 00:19:34.281 "params": { 00:19:34.281 "name": "key0", 00:19:34.281 "path": "/tmp/tmp.6x3LmvrJxv" 00:19:34.281 } 00:19:34.281 } 00:19:34.281 ] 00:19:34.281 }, 00:19:34.281 { 00:19:34.281 "subsystem": "iobuf", 00:19:34.281 "config": [ 00:19:34.281 { 00:19:34.281 "method": "iobuf_set_options", 00:19:34.281 "params": { 00:19:34.281 
"small_pool_count": 8192, 00:19:34.281 "large_pool_count": 1024, 00:19:34.281 "small_bufsize": 8192, 00:19:34.281 "large_bufsize": 135168, 00:19:34.281 "enable_numa": false 00:19:34.281 } 00:19:34.281 } 00:19:34.281 ] 00:19:34.281 }, 00:19:34.281 { 00:19:34.281 "subsystem": "sock", 00:19:34.281 "config": [ 00:19:34.281 { 00:19:34.281 "method": "sock_set_default_impl", 00:19:34.281 "params": { 00:19:34.281 "impl_name": "posix" 00:19:34.281 } 00:19:34.281 }, 00:19:34.281 { 00:19:34.281 "method": "sock_impl_set_options", 00:19:34.281 "params": { 00:19:34.281 "impl_name": "ssl", 00:19:34.281 "recv_buf_size": 4096, 00:19:34.281 "send_buf_size": 4096, 00:19:34.281 "enable_recv_pipe": true, 00:19:34.281 "enable_quickack": false, 00:19:34.281 "enable_placement_id": 0, 00:19:34.281 "enable_zerocopy_send_server": true, 00:19:34.281 "enable_zerocopy_send_client": false, 00:19:34.281 "zerocopy_threshold": 0, 00:19:34.281 "tls_version": 0, 00:19:34.281 "enable_ktls": false 00:19:34.281 } 00:19:34.281 }, 00:19:34.281 { 00:19:34.281 "method": "sock_impl_set_options", 00:19:34.281 "params": { 00:19:34.281 "impl_name": "posix", 00:19:34.281 "recv_buf_size": 2097152, 00:19:34.281 "send_buf_size": 2097152, 00:19:34.281 "enable_recv_pipe": true, 00:19:34.281 "enable_quickack": false, 00:19:34.281 "enable_placement_id": 0, 00:19:34.281 "enable_zerocopy_send_server": true, 00:19:34.281 "enable_zerocopy_send_client": false, 00:19:34.281 "zerocopy_threshold": 0, 00:19:34.281 "tls_version": 0, 00:19:34.281 "enable_ktls": false 00:19:34.281 } 00:19:34.281 } 00:19:34.281 ] 00:19:34.281 }, 00:19:34.281 { 00:19:34.281 "subsystem": "vmd", 00:19:34.281 "config": [] 00:19:34.281 }, 00:19:34.281 { 00:19:34.281 "subsystem": "accel", 00:19:34.281 "config": [ 00:19:34.281 { 00:19:34.281 "method": "accel_set_options", 00:19:34.281 "params": { 00:19:34.281 "small_cache_size": 128, 00:19:34.281 "large_cache_size": 16, 00:19:34.281 "task_count": 2048, 00:19:34.281 "sequence_count": 2048, 00:19:34.281 
"buf_count": 2048 00:19:34.281 } 00:19:34.281 } 00:19:34.281 ] 00:19:34.281 }, 00:19:34.281 { 00:19:34.281 "subsystem": "bdev", 00:19:34.281 "config": [ 00:19:34.281 { 00:19:34.281 "method": "bdev_set_options", 00:19:34.281 "params": { 00:19:34.281 "bdev_io_pool_size": 65535, 00:19:34.281 "bdev_io_cache_size": 256, 00:19:34.282 "bdev_auto_examine": true, 00:19:34.282 "iobuf_small_cache_size": 128, 00:19:34.282 "iobuf_large_cache_size": 16 00:19:34.282 } 00:19:34.282 }, 00:19:34.282 { 00:19:34.282 "method": "bdev_raid_set_options", 00:19:34.282 "params": { 00:19:34.282 "process_window_size_kb": 1024, 00:19:34.282 "process_max_bandwidth_mb_sec": 0 00:19:34.282 } 00:19:34.282 }, 00:19:34.282 { 00:19:34.282 "method": "bdev_iscsi_set_options", 00:19:34.282 "params": { 00:19:34.282 "timeout_sec": 30 00:19:34.282 } 00:19:34.282 }, 00:19:34.282 { 00:19:34.282 "method": "bdev_nvme_set_options", 00:19:34.282 "params": { 00:19:34.282 "action_on_timeout": "none", 00:19:34.282 "timeout_us": 0, 00:19:34.282 "timeout_admin_us": 0, 00:19:34.282 "keep_alive_timeout_ms": 10000, 00:19:34.282 "arbitration_burst": 0, 00:19:34.282 "low_priority_weight": 0, 00:19:34.282 "medium_priority_weight": 0, 00:19:34.282 "high_priority_weight": 0, 00:19:34.282 "nvme_adminq_poll_period_us": 10000, 00:19:34.282 "nvme_ioq_poll_period_us": 0, 00:19:34.282 "io_queue_requests": 512, 00:19:34.282 "delay_cmd_submit": true, 00:19:34.282 "transport_retry_count": 4, 00:19:34.282 "bdev_retry_count": 3, 00:19:34.282 "transport_ack_timeout": 0, 00:19:34.282 "ctrlr_loss_timeout_sec": 0, 00:19:34.282 "reconnect_delay_sec": 0, 00:19:34.282 "fast_io_fail_timeout_sec": 0, 00:19:34.282 "disable_auto_failback": false, 00:19:34.282 "generate_uuids": false, 00:19:34.282 "transport_tos": 0, 00:19:34.282 "nvme_error_stat": false, 00:19:34.282 "rdma_srq_size": 0, 00:19:34.282 "io_path_stat": false, 00:19:34.282 "allow_accel_sequence": false, 00:19:34.282 "rdma_max_cq_size": 0, 00:19:34.282 "rdma_cm_event_timeout_ms": 0, 
00:19:34.282 "dhchap_digests": [ 00:19:34.282 "sha256", 00:19:34.282 "sha384", 00:19:34.282 "sha512" 00:19:34.282 ], 00:19:34.282 "dhchap_dhgroups": [ 00:19:34.282 "null", 00:19:34.282 "ffdhe2048", 00:19:34.282 "ffdhe3072", 00:19:34.282 "ffdhe4096", 00:19:34.282 "ffdhe6144", 00:19:34.282 "ffdhe8192" 00:19:34.282 ] 00:19:34.282 } 00:19:34.282 }, 00:19:34.282 { 00:19:34.282 "method": "bdev_nvme_attach_controller", 00:19:34.282 "params": { 00:19:34.282 "name": "nvme0", 00:19:34.282 "trtype": "TCP", 00:19:34.282 "adrfam": "IPv4", 00:19:34.282 "traddr": "10.0.0.2", 00:19:34.282 "trsvcid": "4420", 00:19:34.282 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.282 "prchk_reftag": false, 00:19:34.282 "prchk_guard": false, 00:19:34.282 "ctrlr_loss_timeout_sec": 0, 00:19:34.282 "reconnect_delay_sec": 0, 00:19:34.282 "fast_io_fail_timeout_sec": 0, 00:19:34.282 "psk": "key0", 00:19:34.282 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:34.282 "hdgst": false, 00:19:34.282 "ddgst": false, 00:19:34.282 "multipath": "multipath" 00:19:34.282 } 00:19:34.282 }, 00:19:34.282 { 00:19:34.282 "method": "bdev_nvme_set_hotplug", 00:19:34.282 "params": { 00:19:34.282 "period_us": 100000, 00:19:34.282 "enable": false 00:19:34.282 } 00:19:34.282 }, 00:19:34.282 { 00:19:34.282 "method": "bdev_enable_histogram", 00:19:34.282 "params": { 00:19:34.282 "name": "nvme0n1", 00:19:34.282 "enable": true 00:19:34.282 } 00:19:34.282 }, 00:19:34.282 { 00:19:34.282 "method": "bdev_wait_for_examine" 00:19:34.282 } 00:19:34.282 ] 00:19:34.282 }, 00:19:34.282 { 00:19:34.282 "subsystem": "nbd", 00:19:34.282 "config": [] 00:19:34.282 } 00:19:34.282 ] 00:19:34.282 }' 00:19:34.282 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 465794 00:19:34.282 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 465794 ']' 00:19:34.282 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 465794 00:19:34.282 12:29:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:34.282 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.282 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465794 00:19:34.282 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:34.282 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:34.282 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465794' 00:19:34.282 killing process with pid 465794 00:19:34.282 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 465794 00:19:34.282 Received shutdown signal, test time was about 1.000000 seconds 00:19:34.282 00:19:34.282 Latency(us) 00:19:34.282 [2024-11-20T11:29:17.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.282 [2024-11-20T11:29:17.398Z] =================================================================================================================== 00:19:34.282 [2024-11-20T11:29:17.398Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:34.282 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 465794 00:19:34.282 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 465598 00:19:34.282 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 465598 ']' 00:19:34.282 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 465598 00:19:34.282 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:34.282 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.282 12:29:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465598 00:19:34.541 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:34.541 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:34.541 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465598' 00:19:34.541 killing process with pid 465598 00:19:34.541 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 465598 00:19:34.541 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 465598 00:19:34.541 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:34.541 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:34.541 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:34.541 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:34.541 "subsystems": [ 00:19:34.541 { 00:19:34.541 "subsystem": "keyring", 00:19:34.541 "config": [ 00:19:34.541 { 00:19:34.542 "method": "keyring_file_add_key", 00:19:34.542 "params": { 00:19:34.542 "name": "key0", 00:19:34.542 "path": "/tmp/tmp.6x3LmvrJxv" 00:19:34.542 } 00:19:34.542 } 00:19:34.542 ] 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "subsystem": "iobuf", 00:19:34.542 "config": [ 00:19:34.542 { 00:19:34.542 "method": "iobuf_set_options", 00:19:34.542 "params": { 00:19:34.542 "small_pool_count": 8192, 00:19:34.542 "large_pool_count": 1024, 00:19:34.542 "small_bufsize": 8192, 00:19:34.542 "large_bufsize": 135168, 00:19:34.542 "enable_numa": false 00:19:34.542 } 00:19:34.542 } 00:19:34.542 ] 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "subsystem": "sock", 00:19:34.542 "config": [ 00:19:34.542 { 
00:19:34.542 "method": "sock_set_default_impl", 00:19:34.542 "params": { 00:19:34.542 "impl_name": "posix" 00:19:34.542 } 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "method": "sock_impl_set_options", 00:19:34.542 "params": { 00:19:34.542 "impl_name": "ssl", 00:19:34.542 "recv_buf_size": 4096, 00:19:34.542 "send_buf_size": 4096, 00:19:34.542 "enable_recv_pipe": true, 00:19:34.542 "enable_quickack": false, 00:19:34.542 "enable_placement_id": 0, 00:19:34.542 "enable_zerocopy_send_server": true, 00:19:34.542 "enable_zerocopy_send_client": false, 00:19:34.542 "zerocopy_threshold": 0, 00:19:34.542 "tls_version": 0, 00:19:34.542 "enable_ktls": false 00:19:34.542 } 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "method": "sock_impl_set_options", 00:19:34.542 "params": { 00:19:34.542 "impl_name": "posix", 00:19:34.542 "recv_buf_size": 2097152, 00:19:34.542 "send_buf_size": 2097152, 00:19:34.542 "enable_recv_pipe": true, 00:19:34.542 "enable_quickack": false, 00:19:34.542 "enable_placement_id": 0, 00:19:34.542 "enable_zerocopy_send_server": true, 00:19:34.542 "enable_zerocopy_send_client": false, 00:19:34.542 "zerocopy_threshold": 0, 00:19:34.542 "tls_version": 0, 00:19:34.542 "enable_ktls": false 00:19:34.542 } 00:19:34.542 } 00:19:34.542 ] 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "subsystem": "vmd", 00:19:34.542 "config": [] 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "subsystem": "accel", 00:19:34.542 "config": [ 00:19:34.542 { 00:19:34.542 "method": "accel_set_options", 00:19:34.542 "params": { 00:19:34.542 "small_cache_size": 128, 00:19:34.542 "large_cache_size": 16, 00:19:34.542 "task_count": 2048, 00:19:34.542 "sequence_count": 2048, 00:19:34.542 "buf_count": 2048 00:19:34.542 } 00:19:34.542 } 00:19:34.542 ] 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "subsystem": "bdev", 00:19:34.542 "config": [ 00:19:34.542 { 00:19:34.542 "method": "bdev_set_options", 00:19:34.542 "params": { 00:19:34.542 "bdev_io_pool_size": 65535, 00:19:34.542 "bdev_io_cache_size": 256, 
00:19:34.542 "bdev_auto_examine": true, 00:19:34.542 "iobuf_small_cache_size": 128, 00:19:34.542 "iobuf_large_cache_size": 16 00:19:34.542 } 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "method": "bdev_raid_set_options", 00:19:34.542 "params": { 00:19:34.542 "process_window_size_kb": 1024, 00:19:34.542 "process_max_bandwidth_mb_sec": 0 00:19:34.542 } 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "method": "bdev_iscsi_set_options", 00:19:34.542 "params": { 00:19:34.542 "timeout_sec": 30 00:19:34.542 } 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "method": "bdev_nvme_set_options", 00:19:34.542 "params": { 00:19:34.542 "action_on_timeout": "none", 00:19:34.542 "timeout_us": 0, 00:19:34.542 "timeout_admin_us": 0, 00:19:34.542 "keep_alive_timeout_ms": 10000, 00:19:34.542 "arbitration_burst": 0, 00:19:34.542 "low_priority_weight": 0, 00:19:34.542 "medium_priority_weight": 0, 00:19:34.542 "high_priority_weight": 0, 00:19:34.542 "nvme_adminq_poll_period_us": 10000, 00:19:34.542 "nvme_ioq_poll_period_us": 0, 00:19:34.542 "io_queue_requests": 0, 00:19:34.542 "delay_cmd_submit": true, 00:19:34.542 "transport_retry_count": 4, 00:19:34.542 "bdev_retry_count": 3, 00:19:34.542 "transport_ack_timeout": 0, 00:19:34.542 "ctrlr_loss_timeout_sec": 0, 00:19:34.542 "reconnect_delay_sec": 0, 00:19:34.542 "fast_io_fail_timeout_sec": 0, 00:19:34.542 "disable_auto_failback": false, 00:19:34.542 "generate_uuids": false, 00:19:34.542 "transport_tos": 0, 00:19:34.542 "nvme_error_stat": false, 00:19:34.542 "rdma_srq_size": 0, 00:19:34.542 "io_path_stat": false, 00:19:34.542 "allow_accel_sequence": false, 00:19:34.542 "rdma_max_cq_size": 0, 00:19:34.542 "rdma_cm_event_timeout_ms": 0, 00:19:34.542 "dhchap_digests": [ 00:19:34.542 "sha256", 00:19:34.542 "sha384", 00:19:34.542 "sha512" 00:19:34.542 ], 00:19:34.542 "dhchap_dhgroups": [ 00:19:34.542 "null", 00:19:34.542 "ffdhe2048", 00:19:34.542 "ffdhe3072", 00:19:34.542 "ffdhe4096", 00:19:34.542 "ffdhe6144", 00:19:34.542 "ffdhe8192" 00:19:34.542 ] 
00:19:34.542 } 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "method": "bdev_nvme_set_hotplug", 00:19:34.542 "params": { 00:19:34.542 "period_us": 100000, 00:19:34.542 "enable": false 00:19:34.542 } 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "method": "bdev_malloc_create", 00:19:34.542 "params": { 00:19:34.542 "name": "malloc0", 00:19:34.542 "num_blocks": 8192, 00:19:34.542 "block_size": 4096, 00:19:34.542 "physical_block_size": 4096, 00:19:34.542 "uuid": "c6f1d290-531a-4086-94a9-358776158b1b", 00:19:34.542 "optimal_io_boundary": 0, 00:19:34.542 "md_size": 0, 00:19:34.542 "dif_type": 0, 00:19:34.542 "dif_is_head_of_md": false, 00:19:34.542 "dif_pi_format": 0 00:19:34.542 } 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "method": "bdev_wait_for_examine" 00:19:34.542 } 00:19:34.542 ] 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "subsystem": "nbd", 00:19:34.542 "config": [] 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "subsystem": "scheduler", 00:19:34.542 "config": [ 00:19:34.542 { 00:19:34.542 "method": "framework_set_scheduler", 00:19:34.542 "params": { 00:19:34.542 "name": "static" 00:19:34.542 } 00:19:34.542 } 00:19:34.542 ] 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "subsystem": "nvmf", 00:19:34.542 "config": [ 00:19:34.542 { 00:19:34.542 "method": "nvmf_set_config", 00:19:34.542 "params": { 00:19:34.542 "discovery_filter": "match_any", 00:19:34.542 "admin_cmd_passthru": { 00:19:34.542 "identify_ctrlr": false 00:19:34.542 }, 00:19:34.542 "dhchap_digests": [ 00:19:34.542 "sha256", 00:19:34.542 "sha384", 00:19:34.542 "sha512" 00:19:34.542 ], 00:19:34.542 "dhchap_dhgroups": [ 00:19:34.542 "null", 00:19:34.542 "ffdhe2048", 00:19:34.542 "ffdhe3072", 00:19:34.542 "ffdhe4096", 00:19:34.542 "ffdhe6144", 00:19:34.542 "ffdhe8192" 00:19:34.542 ] 00:19:34.542 } 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "method": "nvmf_set_max_subsystems", 00:19:34.542 "params": { 00:19:34.542 "max_subsystems": 1024 00:19:34.542 } 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "method": 
"nvmf_set_crdt", 00:19:34.542 "params": { 00:19:34.542 "crdt1": 0, 00:19:34.542 "crdt2": 0, 00:19:34.542 "crdt3": 0 00:19:34.542 } 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "method": "nvmf_create_transport", 00:19:34.542 "params": { 00:19:34.542 "trtype": "TCP", 00:19:34.542 "max_queue_depth": 128, 00:19:34.542 "max_io_qpairs_per_ctrlr": 127, 00:19:34.542 "in_capsule_data_size": 4096, 00:19:34.542 "max_io_size": 131072, 00:19:34.542 "io_unit_size": 131072, 00:19:34.542 "max_aq_depth": 128, 00:19:34.542 "num_shared_buffers": 511, 00:19:34.542 "buf_cache_size": 4294967295, 00:19:34.542 "dif_insert_or_strip": false, 00:19:34.542 "zcopy": false, 00:19:34.542 "c2h_success": false, 00:19:34.542 "sock_priority": 0, 00:19:34.542 "abort_timeout_sec": 1, 00:19:34.542 "ack_timeout": 0, 00:19:34.542 "data_wr_pool_size": 0 00:19:34.542 } 00:19:34.542 }, 00:19:34.542 { 00:19:34.542 "method": "nvmf_create_subsystem", 00:19:34.543 "params": { 00:19:34.543 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.543 "allow_any_host": false, 00:19:34.543 "serial_number": "00000000000000000000", 00:19:34.543 "model_number": "SPDK bdev Controller", 00:19:34.543 "max_namespaces": 32, 00:19:34.543 "min_cntlid": 1, 00:19:34.543 "max_cntlid": 65519, 00:19:34.543 "ana_reporting": false 00:19:34.543 } 00:19:34.543 }, 00:19:34.543 { 00:19:34.543 "method": "nvmf_subsystem_add_host", 00:19:34.543 "params": { 00:19:34.543 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.543 "host": "nqn.2016-06.io.spdk:host1", 00:19:34.543 "psk": "key0" 00:19:34.543 } 00:19:34.543 }, 00:19:34.543 { 00:19:34.543 "method": "nvmf_subsystem_add_ns", 00:19:34.543 "params": { 00:19:34.543 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.543 "namespace": { 00:19:34.543 "nsid": 1, 00:19:34.543 "bdev_name": "malloc0", 00:19:34.543 "nguid": "C6F1D290531A408694A9358776158B1B", 00:19:34.543 "uuid": "c6f1d290-531a-4086-94a9-358776158b1b", 00:19:34.543 "no_auto_visible": false 00:19:34.543 } 00:19:34.543 } 00:19:34.543 }, 00:19:34.543 { 
00:19:34.543 "method": "nvmf_subsystem_add_listener", 00:19:34.543 "params": { 00:19:34.543 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.543 "listen_address": { 00:19:34.543 "trtype": "TCP", 00:19:34.543 "adrfam": "IPv4", 00:19:34.543 "traddr": "10.0.0.2", 00:19:34.543 "trsvcid": "4420" 00:19:34.543 }, 00:19:34.543 "secure_channel": false, 00:19:34.543 "sock_impl": "ssl" 00:19:34.543 } 00:19:34.543 } 00:19:34.543 ] 00:19:34.543 } 00:19:34.543 ] 00:19:34.543 }' 00:19:34.543 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.543 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=466185 00:19:34.543 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:34.543 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 466185 00:19:34.543 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 466185 ']' 00:19:34.543 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.543 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.543 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.543 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.543 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.801 [2024-11-20 12:29:17.679266] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
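The target above is started as `nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62`, i.e. the JSON configuration printed in the log is streamed to the binary over a process-substitution file descriptor rather than written to disk. A minimal, runnable sketch of that pattern follows; `cat` stands in for `nvmf_tgt` and the one-subsystem config is a hypothetical stand-in, so this only illustrates the `/dev/fd/N` plumbing, not SPDK itself:

```shell
# Hypothetical minimal config; the real log passes the full multi-subsystem
# JSON shown above.
config='{"subsystems":[{"subsystem":"nbd","config":[]}]}'

# Process substitution exposes the string as a readable /dev/fd/N path,
# which is the same mechanism behind '-c /dev/fd/62' in the log.
# 'cat' is a stand-in for the nvmf_tgt binary so the sketch runs anywhere.
received=$(cat <(printf '%s' "$config"))

echo "$received"
```

This avoids leaving a temporary config file behind, which matters in CI where the workspace is wiped between runs.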
00:19:34.801 [2024-11-20 12:29:17.679313] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.801 [2024-11-20 12:29:17.758315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.801 [2024-11-20 12:29:17.798695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.801 [2024-11-20 12:29:17.798732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.801 [2024-11-20 12:29:17.798739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.801 [2024-11-20 12:29:17.798745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.801 [2024-11-20 12:29:17.798750] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
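The notice above suggests copying `/dev/shm/nvmf_trace.0` for offline analysis, and later in this log the cleanup step packs it with `tar -C /dev/shm/ -cvzf ..._shm.tar.gz nvmf_trace.0`. A self-contained sketch of that archiving step is below; the temp directory and dummy trace bytes are stand-ins so it runs without SPDK:

```shell
# Stand-in for /dev/shm/nvmf_trace.0; contents are dummy bytes.
workdir=$(mktemp -d)
printf 'trace-bytes' > "$workdir/nvmf_trace.0"

# '-C' changes into the directory before archiving, so the tarball stores the
# bare filename 'nvmf_trace.0' -- the same layout the autotest cleanup produces.
tar -C "$workdir" -czf "$workdir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0

# List the archive to confirm the member name has no leading path.
listed=$(tar -tzf "$workdir/nvmf_trace.0_shm.tar.gz")
echo "$listed"
```

Storing only the bare filename keeps the archive relocatable: extracting it in any scratch directory reproduces `nvmf_trace.0` without recreating the `/dev/shm` path.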
00:19:34.801 [2024-11-20 12:29:17.799366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.059 [2024-11-20 12:29:18.011141] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.059 [2024-11-20 12:29:18.043165] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:35.059 [2024-11-20 12:29:18.043363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.626 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.626 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:35.626 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:35.626 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:35.626 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.626 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.626 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=466340 00:19:35.626 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 466340 /var/tmp/bdevperf.sock 00:19:35.626 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 466340 ']' 00:19:35.626 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.626 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:35.626 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:35.626 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:35.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:35.626 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:35.626 "subsystems": [ 00:19:35.626 { 00:19:35.626 "subsystem": "keyring", 00:19:35.626 "config": [ 00:19:35.626 { 00:19:35.626 "method": "keyring_file_add_key", 00:19:35.626 "params": { 00:19:35.626 "name": "key0", 00:19:35.626 "path": "/tmp/tmp.6x3LmvrJxv" 00:19:35.626 } 00:19:35.626 } 00:19:35.626 ] 00:19:35.626 }, 00:19:35.626 { 00:19:35.626 "subsystem": "iobuf", 00:19:35.626 "config": [ 00:19:35.626 { 00:19:35.626 "method": "iobuf_set_options", 00:19:35.626 "params": { 00:19:35.626 "small_pool_count": 8192, 00:19:35.626 "large_pool_count": 1024, 00:19:35.626 "small_bufsize": 8192, 00:19:35.626 "large_bufsize": 135168, 00:19:35.626 "enable_numa": false 00:19:35.626 } 00:19:35.626 } 00:19:35.626 ] 00:19:35.626 }, 00:19:35.626 { 00:19:35.626 "subsystem": "sock", 00:19:35.626 "config": [ 00:19:35.626 { 00:19:35.626 "method": "sock_set_default_impl", 00:19:35.626 "params": { 00:19:35.626 "impl_name": "posix" 00:19:35.626 } 00:19:35.626 }, 00:19:35.626 { 00:19:35.626 "method": "sock_impl_set_options", 00:19:35.626 "params": { 00:19:35.626 "impl_name": "ssl", 00:19:35.627 "recv_buf_size": 4096, 00:19:35.627 "send_buf_size": 4096, 00:19:35.627 "enable_recv_pipe": true, 00:19:35.627 "enable_quickack": false, 00:19:35.627 "enable_placement_id": 0, 00:19:35.627 "enable_zerocopy_send_server": true, 00:19:35.627 "enable_zerocopy_send_client": false, 00:19:35.627 "zerocopy_threshold": 0, 00:19:35.627 "tls_version": 0, 00:19:35.627 "enable_ktls": false 00:19:35.627 } 00:19:35.627 }, 00:19:35.627 { 00:19:35.627 "method": "sock_impl_set_options", 00:19:35.627 "params": { 
00:19:35.627 "impl_name": "posix", 00:19:35.627 "recv_buf_size": 2097152, 00:19:35.627 "send_buf_size": 2097152, 00:19:35.627 "enable_recv_pipe": true, 00:19:35.627 "enable_quickack": false, 00:19:35.627 "enable_placement_id": 0, 00:19:35.627 "enable_zerocopy_send_server": true, 00:19:35.627 "enable_zerocopy_send_client": false, 00:19:35.627 "zerocopy_threshold": 0, 00:19:35.627 "tls_version": 0, 00:19:35.627 "enable_ktls": false 00:19:35.627 } 00:19:35.627 } 00:19:35.627 ] 00:19:35.627 }, 00:19:35.627 { 00:19:35.627 "subsystem": "vmd", 00:19:35.627 "config": [] 00:19:35.627 }, 00:19:35.627 { 00:19:35.627 "subsystem": "accel", 00:19:35.627 "config": [ 00:19:35.627 { 00:19:35.627 "method": "accel_set_options", 00:19:35.627 "params": { 00:19:35.627 "small_cache_size": 128, 00:19:35.627 "large_cache_size": 16, 00:19:35.627 "task_count": 2048, 00:19:35.627 "sequence_count": 2048, 00:19:35.627 "buf_count": 2048 00:19:35.627 } 00:19:35.627 } 00:19:35.627 ] 00:19:35.627 }, 00:19:35.627 { 00:19:35.627 "subsystem": "bdev", 00:19:35.627 "config": [ 00:19:35.627 { 00:19:35.627 "method": "bdev_set_options", 00:19:35.627 "params": { 00:19:35.627 "bdev_io_pool_size": 65535, 00:19:35.627 "bdev_io_cache_size": 256, 00:19:35.627 "bdev_auto_examine": true, 00:19:35.627 "iobuf_small_cache_size": 128, 00:19:35.627 "iobuf_large_cache_size": 16 00:19:35.627 } 00:19:35.627 }, 00:19:35.627 { 00:19:35.627 "method": "bdev_raid_set_options", 00:19:35.627 "params": { 00:19:35.627 "process_window_size_kb": 1024, 00:19:35.627 "process_max_bandwidth_mb_sec": 0 00:19:35.627 } 00:19:35.627 }, 00:19:35.627 { 00:19:35.627 "method": "bdev_iscsi_set_options", 00:19:35.627 "params": { 00:19:35.627 "timeout_sec": 30 00:19:35.627 } 00:19:35.627 }, 00:19:35.627 { 00:19:35.627 "method": "bdev_nvme_set_options", 00:19:35.627 "params": { 00:19:35.627 "action_on_timeout": "none", 00:19:35.627 "timeout_us": 0, 00:19:35.627 "timeout_admin_us": 0, 00:19:35.627 "keep_alive_timeout_ms": 10000, 00:19:35.627 
"arbitration_burst": 0, 00:19:35.627 "low_priority_weight": 0, 00:19:35.627 "medium_priority_weight": 0, 00:19:35.627 "high_priority_weight": 0, 00:19:35.627 "nvme_adminq_poll_period_us": 10000, 00:19:35.627 "nvme_ioq_poll_period_us": 0, 00:19:35.627 "io_queue_requests": 512, 00:19:35.627 "delay_cmd_submit": true, 00:19:35.627 "transport_retry_count": 4, 00:19:35.627 "bdev_retry_count": 3, 00:19:35.627 "transport_ack_timeout": 0, 00:19:35.627 "ctrlr_loss_timeout_sec": 0, 00:19:35.627 "reconnect_delay_sec": 0, 00:19:35.627 "fast_io_fail_timeout_sec": 0, 00:19:35.627 "disable_auto_failback": false, 00:19:35.627 "generate_uuids": false, 00:19:35.627 "transport_tos": 0, 00:19:35.627 "nvme_error_stat": false, 00:19:35.627 "rdma_srq_size": 0, 00:19:35.627 "io_path_stat": false, 00:19:35.627 "allow_accel_sequence": false, 00:19:35.627 "rdma_max_cq_size": 0, 00:19:35.627 "rdma_cm_event_timeout_ms": 0, 00:19:35.627 "dhchap_digests": [ 00:19:35.627 "sha256", 00:19:35.627 "sha384", 00:19:35.627 "sha512" 00:19:35.627 ], 00:19:35.627 "dhchap_dhgroups": [ 00:19:35.627 "null", 00:19:35.627 "ffdhe2048", 00:19:35.627 "ffdhe3072", 00:19:35.627 "ffdhe4096", 00:19:35.627 "ffdhe6144", 00:19:35.627 "ffdhe8192" 00:19:35.627 ] 00:19:35.627 } 00:19:35.627 }, 00:19:35.627 { 00:19:35.627 "method": "bdev_nvme_attach_controller", 00:19:35.627 "params": { 00:19:35.627 "name": "nvme0", 00:19:35.627 "trtype": "TCP", 00:19:35.627 "adrfam": "IPv4", 00:19:35.627 "traddr": "10.0.0.2", 00:19:35.627 "trsvcid": "4420", 00:19:35.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.627 "prchk_reftag": false, 00:19:35.627 "prchk_guard": false, 00:19:35.627 "ctrlr_loss_timeout_sec": 0, 00:19:35.627 "reconnect_delay_sec": 0, 00:19:35.627 "fast_io_fail_timeout_sec": 0, 00:19:35.627 "psk": "key0", 00:19:35.627 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:35.627 "hdgst": false, 00:19:35.627 "ddgst": false, 00:19:35.627 "multipath": "multipath" 00:19:35.627 } 00:19:35.627 }, 00:19:35.627 { 00:19:35.627 
"method": "bdev_nvme_set_hotplug", 00:19:35.627 "params": { 00:19:35.627 "period_us": 100000, 00:19:35.627 "enable": false 00:19:35.627 } 00:19:35.627 }, 00:19:35.627 { 00:19:35.627 "method": "bdev_enable_histogram", 00:19:35.627 "params": { 00:19:35.627 "name": "nvme0n1", 00:19:35.627 "enable": true 00:19:35.627 } 00:19:35.627 }, 00:19:35.627 { 00:19:35.627 "method": "bdev_wait_for_examine" 00:19:35.627 } 00:19:35.627 ] 00:19:35.627 }, 00:19:35.627 { 00:19:35.627 "subsystem": "nbd", 00:19:35.627 "config": [] 00:19:35.627 } 00:19:35.627 ] 00:19:35.627 }' 00:19:35.627 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.627 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.627 [2024-11-20 12:29:18.587831] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:19:35.627 [2024-11-20 12:29:18.587880] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466340 ] 00:19:35.627 [2024-11-20 12:29:18.663404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.627 [2024-11-20 12:29:18.703844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.886 [2024-11-20 12:29:18.857545] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:36.453 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.453 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:36.453 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:36.453 12:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:36.712 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.712 12:29:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:36.712 Running I/O for 1 seconds... 00:19:37.784 5056.00 IOPS, 19.75 MiB/s 00:19:37.784 Latency(us) 00:19:37.784 [2024-11-20T11:29:20.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.784 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:37.784 Verification LBA range: start 0x0 length 0x2000 00:19:37.784 nvme0n1 : 1.02 5091.47 19.89 0.00 0.00 24924.62 5527.82 21085.50 00:19:37.784 [2024-11-20T11:29:20.900Z] =================================================================================================================== 00:19:37.784 [2024-11-20T11:29:20.900Z] Total : 5091.47 19.89 0.00 0.00 24924.62 5527.82 21085.50 00:19:37.784 { 00:19:37.784 "results": [ 00:19:37.784 { 00:19:37.784 "job": "nvme0n1", 00:19:37.784 "core_mask": "0x2", 00:19:37.784 "workload": "verify", 00:19:37.784 "status": "finished", 00:19:37.784 "verify_range": { 00:19:37.784 "start": 0, 00:19:37.785 "length": 8192 00:19:37.785 }, 00:19:37.785 "queue_depth": 128, 00:19:37.785 "io_size": 4096, 00:19:37.785 "runtime": 1.018174, 00:19:37.785 "iops": 5091.467666626726, 00:19:37.785 "mibps": 19.88854557276065, 00:19:37.785 "io_failed": 0, 00:19:37.785 "io_timeout": 0, 00:19:37.785 "avg_latency_us": 24924.618958668812, 00:19:37.785 "min_latency_us": 5527.819130434783, 00:19:37.785 "max_latency_us": 21085.49565217391 00:19:37.785 } 00:19:37.785 ], 00:19:37.785 "core_count": 1 00:19:37.785 } 00:19:37.785 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:37.785 12:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:37.785 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:37.785 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:37.785 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:37.785 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:37.785 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:37.785 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:37.785 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:37.785 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:37.785 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:37.785 nvmf_trace.0 00:19:37.785 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:37.785 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 466340 00:19:37.785 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 466340 ']' 00:19:37.785 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 466340 00:19:37.785 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:37.785 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:37.785 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 466340 00:19:38.082 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:38.082 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:38.082 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 466340' 00:19:38.082 killing process with pid 466340 00:19:38.082 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 466340 00:19:38.082 Received shutdown signal, test time was about 1.000000 seconds 00:19:38.082 00:19:38.082 Latency(us) 00:19:38.082 [2024-11-20T11:29:21.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.082 [2024-11-20T11:29:21.198Z] =================================================================================================================== 00:19:38.082 [2024-11-20T11:29:21.198Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:38.082 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 466340 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:38.082 rmmod nvme_tcp 00:19:38.082 rmmod nvme_fabrics 00:19:38.082 rmmod nvme_keyring 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 466185 ']' 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 466185 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 466185 ']' 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 466185 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 466185 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 466185' 00:19:38.082 killing process with pid 466185 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 466185 00:19:38.082 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 466185 00:19:38.341 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:38.341 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:38.341 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:38.341 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:19:38.341 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:38.341 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:38.341 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:38.341 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:38.341 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:38.341 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.341 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.341 12:29:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.878 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:40.878 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.p6C9rXHAX9 /tmp/tmp.KlisWKYhZh /tmp/tmp.6x3LmvrJxv 00:19:40.878 00:19:40.878 real 1m20.995s 00:19:40.878 user 2m4.535s 00:19:40.878 sys 0m29.872s 00:19:40.878 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.878 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.878 ************************************ 00:19:40.878 END TEST nvmf_tls 00:19:40.878 ************************************ 00:19:40.878 12:29:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:40.878 12:29:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:40.878 12:29:23 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.878 12:29:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:40.878 ************************************ 00:19:40.878 START TEST nvmf_fips 00:19:40.878 ************************************ 00:19:40.878 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:40.878 * Looking for test storage... 00:19:40.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:40.878 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:40.878 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:40.878 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:19:40.878 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.879 
12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:40.879 12:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:40.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.879 --rc genhtml_branch_coverage=1 00:19:40.879 --rc genhtml_function_coverage=1 00:19:40.879 --rc genhtml_legend=1 00:19:40.879 --rc geninfo_all_blocks=1 00:19:40.879 --rc geninfo_unexecuted_blocks=1 00:19:40.879 00:19:40.879 ' 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:40.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.879 --rc genhtml_branch_coverage=1 00:19:40.879 --rc genhtml_function_coverage=1 00:19:40.879 --rc genhtml_legend=1 00:19:40.879 --rc geninfo_all_blocks=1 00:19:40.879 --rc geninfo_unexecuted_blocks=1 00:19:40.879 00:19:40.879 ' 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:40.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.879 --rc genhtml_branch_coverage=1 00:19:40.879 --rc genhtml_function_coverage=1 00:19:40.879 --rc genhtml_legend=1 00:19:40.879 --rc geninfo_all_blocks=1 00:19:40.879 --rc geninfo_unexecuted_blocks=1 00:19:40.879 00:19:40.879 ' 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:40.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.879 --rc genhtml_branch_coverage=1 00:19:40.879 --rc genhtml_function_coverage=1 00:19:40.879 --rc genhtml_legend=1 00:19:40.879 --rc geninfo_all_blocks=1 00:19:40.879 --rc geninfo_unexecuted_blocks=1 00:19:40.879 00:19:40.879 ' 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.879 12:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.879 12:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:40.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.879 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:40.880 Error setting digest 00:19:40.880 40F27AED657F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:40.880 40F27AED657F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:40.880 12:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:40.880 12:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:47.451 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:47.452 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:47.452 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:47.452 Found net devices under 0000:86:00.0: cvl_0_0 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:47.452 Found net devices under 0000:86:00.1: cvl_0_1 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:47.452 12:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:47.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:19:47.452 00:19:47.452 --- 10.0.0.2 ping statistics --- 00:19:47.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.452 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:47.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:47.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:19:47.452 00:19:47.452 --- 10.0.0.1 ping statistics --- 00:19:47.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.452 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:47.452 12:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=470359 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 470359 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 470359 ']' 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.452 12:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:47.452 [2024-11-20 12:29:29.919558] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:19:47.452 [2024-11-20 12:29:29.919613] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.452 [2024-11-20 12:29:29.999905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.452 [2024-11-20 12:29:30.050675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.452 [2024-11-20 12:29:30.050714] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.452 [2024-11-20 12:29:30.050721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.452 [2024-11-20 12:29:30.050727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.452 [2024-11-20 12:29:30.050733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:47.452 [2024-11-20 12:29:30.051292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.712 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.712 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:47.712 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:47.712 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:47.712 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:47.712 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.712 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:47.712 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:47.712 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:47.712 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.1AX 00:19:47.712 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:47.712 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.1AX 00:19:47.712 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.1AX 00:19:47.712 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.1AX 00:19:47.712 12:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:47.971 [2024-11-20 12:29:30.962607] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.971 [2024-11-20 12:29:30.978617] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:47.971 [2024-11-20 12:29:30.978799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.971 malloc0 00:19:47.971 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:47.971 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=470609 00:19:47.971 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:47.971 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 470609 /var/tmp/bdevperf.sock 00:19:47.971 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 470609 ']' 00:19:47.971 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.971 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.971 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.971 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.971 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:48.230 [2024-11-20 12:29:31.106708] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:19:48.230 [2024-11-20 12:29:31.106758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470609 ] 00:19:48.230 [2024-11-20 12:29:31.182349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.230 [2024-11-20 12:29:31.222707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.167 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.167 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:49.167 12:29:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.1AX 00:19:49.167 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:49.167 [2024-11-20 12:29:32.279971] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.426 TLSTESTn1 00:19:49.426 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:49.426 Running I/O for 10 seconds... 
00:19:51.742 5203.00 IOPS, 20.32 MiB/s [2024-11-20T11:29:35.795Z] 5318.50 IOPS, 20.78 MiB/s [2024-11-20T11:29:36.731Z] 5362.00 IOPS, 20.95 MiB/s [2024-11-20T11:29:37.668Z] 5320.75 IOPS, 20.78 MiB/s [2024-11-20T11:29:38.604Z] 5342.80 IOPS, 20.87 MiB/s [2024-11-20T11:29:39.542Z] 5380.33 IOPS, 21.02 MiB/s [2024-11-20T11:29:40.477Z] 5382.43 IOPS, 21.03 MiB/s [2024-11-20T11:29:41.854Z] 5384.75 IOPS, 21.03 MiB/s [2024-11-20T11:29:42.791Z] 5397.44 IOPS, 21.08 MiB/s [2024-11-20T11:29:42.791Z] 5400.70 IOPS, 21.10 MiB/s 00:19:59.675 Latency(us) 00:19:59.675 [2024-11-20T11:29:42.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.675 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:59.675 Verification LBA range: start 0x0 length 0x2000 00:19:59.675 TLSTESTn1 : 10.01 5405.95 21.12 0.00 0.00 23642.80 5442.34 23023.08 00:19:59.675 [2024-11-20T11:29:42.791Z] =================================================================================================================== 00:19:59.675 [2024-11-20T11:29:42.791Z] Total : 5405.95 21.12 0.00 0.00 23642.80 5442.34 23023.08 00:19:59.675 { 00:19:59.675 "results": [ 00:19:59.675 { 00:19:59.675 "job": "TLSTESTn1", 00:19:59.675 "core_mask": "0x4", 00:19:59.675 "workload": "verify", 00:19:59.675 "status": "finished", 00:19:59.675 "verify_range": { 00:19:59.675 "start": 0, 00:19:59.675 "length": 8192 00:19:59.675 }, 00:19:59.675 "queue_depth": 128, 00:19:59.675 "io_size": 4096, 00:19:59.675 "runtime": 10.013783, 00:19:59.675 "iops": 5405.948980520149, 00:19:59.675 "mibps": 21.116988205156833, 00:19:59.675 "io_failed": 0, 00:19:59.675 "io_timeout": 0, 00:19:59.675 "avg_latency_us": 23642.802240752015, 00:19:59.675 "min_latency_us": 5442.337391304348, 00:19:59.676 "max_latency_us": 23023.081739130434 00:19:59.676 } 00:19:59.676 ], 00:19:59.676 "core_count": 1 00:19:59.676 } 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:59.676 
12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:59.676 nvmf_trace.0 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 470609 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 470609 ']' 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 470609 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 470609 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 470609' 00:19:59.676 killing process with pid 470609 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 470609 00:19:59.676 Received shutdown signal, test time was about 10.000000 seconds 00:19:59.676 00:19:59.676 Latency(us) 00:19:59.676 [2024-11-20T11:29:42.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.676 [2024-11-20T11:29:42.792Z] =================================================================================================================== 00:19:59.676 [2024-11-20T11:29:42.792Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.676 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 470609 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:59.935 rmmod nvme_tcp 00:19:59.935 rmmod nvme_fabrics 00:19:59.935 rmmod nvme_keyring 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:59.935 12:29:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 470359 ']' 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 470359 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 470359 ']' 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 470359 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 470359 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 470359' 00:19:59.935 killing process with pid 470359 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 470359 00:19:59.935 12:29:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 470359 00:20:00.194 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:00.195 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:00.195 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:00.195 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:20:00.195 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:00.195 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:00.195 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:00.195 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:00.195 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:00.195 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.195 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.195 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.101 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:02.101 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.1AX 00:20:02.101 00:20:02.101 real 0m21.676s 00:20:02.101 user 0m23.541s 00:20:02.101 sys 0m9.509s 00:20:02.101 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:02.101 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:02.101 ************************************ 00:20:02.101 END TEST nvmf_fips 00:20:02.101 ************************************ 00:20:02.101 12:29:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:02.101 12:29:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:02.101 12:29:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:20:02.101 12:29:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:02.361 ************************************ 00:20:02.361 START TEST nvmf_control_msg_list 00:20:02.361 ************************************ 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:02.361 * Looking for test storage... 00:20:02.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:02.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.361 --rc genhtml_branch_coverage=1 00:20:02.361 --rc genhtml_function_coverage=1 00:20:02.361 --rc genhtml_legend=1 00:20:02.361 --rc geninfo_all_blocks=1 00:20:02.361 --rc geninfo_unexecuted_blocks=1 00:20:02.361 00:20:02.361 ' 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:02.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.361 --rc genhtml_branch_coverage=1 00:20:02.361 --rc genhtml_function_coverage=1 00:20:02.361 --rc genhtml_legend=1 00:20:02.361 --rc geninfo_all_blocks=1 00:20:02.361 --rc geninfo_unexecuted_blocks=1 00:20:02.361 00:20:02.361 ' 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:02.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.361 --rc genhtml_branch_coverage=1 00:20:02.361 --rc genhtml_function_coverage=1 00:20:02.361 --rc genhtml_legend=1 00:20:02.361 --rc geninfo_all_blocks=1 00:20:02.361 --rc geninfo_unexecuted_blocks=1 00:20:02.361 00:20:02.361 ' 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:02.361 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.361 --rc genhtml_branch_coverage=1 00:20:02.361 --rc genhtml_function_coverage=1 00:20:02.361 --rc genhtml_legend=1 00:20:02.361 --rc geninfo_all_blocks=1 00:20:02.361 --rc geninfo_unexecuted_blocks=1 00:20:02.361 00:20:02.361 ' 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.361 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:02.362 12:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.362 12:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:02.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:02.362 12:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:02.362 12:29:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:08.934 12:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.934 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:08.935 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:08.935 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:08.935 12:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:08.935 Found net devices under 0000:86:00.0: cvl_0_0 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:08.935 12:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:08.935 Found net devices under 0000:86:00.1: cvl_0_1 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:08.935 12:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:08.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:20:08.935 00:20:08.935 --- 10.0.0.2 ping statistics --- 00:20:08.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.935 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:08.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:20:08.935 00:20:08.935 --- 10.0.0.1 ping statistics --- 00:20:08.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.935 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=475980 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 475980 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 475980 ']' 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.935 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:08.936 [2024-11-20 12:29:51.428974] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:20:08.936 [2024-11-20 12:29:51.429027] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.936 [2024-11-20 12:29:51.510296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.936 [2024-11-20 12:29:51.551567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.936 [2024-11-20 12:29:51.551603] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.936 [2024-11-20 12:29:51.551610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.936 [2024-11-20 12:29:51.551616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.936 [2024-11-20 12:29:51.551621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:08.936 [2024-11-20 12:29:51.552235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:08.936 [2024-11-20 12:29:51.696904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:08.936 Malloc0 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:08.936 [2024-11-20 12:29:51.737384] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=476081 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=476084 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=476087 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 476081 00:20:08.936 12:29:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:08.936 [2024-11-20 12:29:51.826161] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:20:08.936 [2024-11-20 12:29:51.826353] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:08.936 [2024-11-20 12:29:51.826587] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:09.873 Initializing NVMe Controllers 00:20:09.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:09.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:09.873 Initialization complete. Launching workers. 00:20:09.873 ======================================================== 00:20:09.873 Latency(us) 00:20:09.873 Device Information : IOPS MiB/s Average min max 00:20:09.873 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6404.00 25.02 155.80 127.69 923.55 00:20:09.873 ======================================================== 00:20:09.873 Total : 6404.00 25.02 155.80 127.69 923.55 00:20:09.873 00:20:09.873 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 476084 00:20:09.873 Initializing NVMe Controllers 00:20:09.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:09.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:09.873 Initialization complete. Launching workers. 
00:20:09.873 ======================================================== 00:20:09.873 Latency(us) 00:20:09.873 Device Information : IOPS MiB/s Average min max 00:20:09.873 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6446.00 25.18 154.79 126.12 377.95 00:20:09.873 ======================================================== 00:20:09.873 Total : 6446.00 25.18 154.79 126.12 377.95 00:20:09.873 00:20:09.873 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 476087 00:20:10.133 Initializing NVMe Controllers 00:20:10.133 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:10.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:10.133 Initialization complete. Launching workers. 00:20:10.133 ======================================================== 00:20:10.133 Latency(us) 00:20:10.133 Device Information : IOPS MiB/s Average min max 00:20:10.133 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40982.53 40781.93 41857.25 00:20:10.133 ======================================================== 00:20:10.133 Total : 25.00 0.10 40982.53 40781.93 41857.25 00:20:10.133 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:10.133 12:29:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:10.133 rmmod nvme_tcp 00:20:10.133 rmmod nvme_fabrics 00:20:10.133 rmmod nvme_keyring 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 475980 ']' 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 475980 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 475980 ']' 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 475980 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 475980 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 475980' 00:20:10.133 killing process with pid 475980 00:20:10.133 12:29:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 475980 00:20:10.133 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 475980 00:20:10.393 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:10.393 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:10.393 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:10.393 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:10.393 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:10.393 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:10.393 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:10.393 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:10.393 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:10.393 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.393 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:10.393 12:29:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.298 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:12.298 00:20:12.298 real 0m10.107s 00:20:12.298 user 0m6.622s 00:20:12.298 sys 0m5.489s 00:20:12.298 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:20:12.298 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:12.298 ************************************ 00:20:12.298 END TEST nvmf_control_msg_list 00:20:12.298 ************************************ 00:20:12.298 12:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:12.298 12:29:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:12.298 12:29:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:12.298 12:29:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:12.558 ************************************ 00:20:12.558 START TEST nvmf_wait_for_buf 00:20:12.558 ************************************ 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:12.558 * Looking for test storage... 
00:20:12.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:20:12.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.558 --rc genhtml_branch_coverage=1 00:20:12.558 --rc genhtml_function_coverage=1 00:20:12.558 --rc genhtml_legend=1 00:20:12.558 --rc geninfo_all_blocks=1 00:20:12.558 --rc geninfo_unexecuted_blocks=1 00:20:12.558 00:20:12.558 ' 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:12.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.558 --rc genhtml_branch_coverage=1 00:20:12.558 --rc genhtml_function_coverage=1 00:20:12.558 --rc genhtml_legend=1 00:20:12.558 --rc geninfo_all_blocks=1 00:20:12.558 --rc geninfo_unexecuted_blocks=1 00:20:12.558 00:20:12.558 ' 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:12.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.558 --rc genhtml_branch_coverage=1 00:20:12.558 --rc genhtml_function_coverage=1 00:20:12.558 --rc genhtml_legend=1 00:20:12.558 --rc geninfo_all_blocks=1 00:20:12.558 --rc geninfo_unexecuted_blocks=1 00:20:12.558 00:20:12.558 ' 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:12.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.558 --rc genhtml_branch_coverage=1 00:20:12.558 --rc genhtml_function_coverage=1 00:20:12.558 --rc genhtml_legend=1 00:20:12.558 --rc geninfo_all_blocks=1 00:20:12.558 --rc geninfo_unexecuted_blocks=1 00:20:12.558 00:20:12.558 ' 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:12.558 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:12.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:12.559 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.130 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:19.130 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:19.130 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:19.130 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:19.130 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:19.130 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:19.130 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:19.130 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:19.130 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:19.130 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:19.130 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:19.130 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:19.130 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:19.130 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:19.130 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:19.130 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:19.130 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:19.130 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:19.131 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:19.131 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:19.131 Found net devices under 0000:86:00.0: cvl_0_0 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:19.131 12:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:19.131 Found net devices under 0000:86:00.1: cvl_0_1 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:19.131 12:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:19.131 12:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:19.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:20:19.131 00:20:19.131 --- 10.0.0.2 ping statistics --- 00:20:19.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.131 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:19.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:19.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:20:19.131 00:20:19.131 --- 10.0.0.1 ping statistics --- 00:20:19.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.131 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=479865 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 479865 00:20:19.131 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 479865 ']' 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.132 [2024-11-20 12:30:01.652013] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:20:19.132 [2024-11-20 12:30:01.652058] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.132 [2024-11-20 12:30:01.733468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.132 [2024-11-20 12:30:01.775184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.132 [2024-11-20 12:30:01.775220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:19.132 [2024-11-20 12:30:01.775228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.132 [2024-11-20 12:30:01.775234] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.132 [2024-11-20 12:30:01.775243] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.132 [2024-11-20 12:30:01.775826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.132 
12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.132 Malloc0 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:20:19.132 [2024-11-20 12:30:01.957060] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:19.132 [2024-11-20 12:30:01.985242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:19.132 12:30:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:19.132 [2024-11-20 12:30:02.068020] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:20.511 Initializing NVMe Controllers 00:20:20.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:20.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:20.511 Initialization complete. Launching workers. 00:20:20.511 ======================================================== 00:20:20.511 Latency(us) 00:20:20.511 Device Information : IOPS MiB/s Average min max 00:20:20.511 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 128.54 16.07 32207.12 7256.29 63842.88 00:20:20.511 ======================================================== 00:20:20.511 Total : 128.54 16.07 32207.12 7256.29 63842.88 00:20:20.511 00:20:20.511 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:20.511 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:20.511 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.511 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.770 12:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:20.770 rmmod nvme_tcp 00:20:20.770 rmmod nvme_fabrics 00:20:20.770 rmmod nvme_keyring 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 479865 ']' 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 479865 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 479865 ']' 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 479865 
00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 479865 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 479865' 00:20:20.770 killing process with pid 479865 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 479865 00:20:20.770 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 479865 00:20:21.030 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:21.030 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:21.030 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:21.030 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:21.030 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:21.030 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:21.030 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:21.030 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:21.030 12:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:21.030 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.030 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.030 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.940 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:22.940 00:20:22.940 real 0m10.546s 00:20:22.940 user 0m4.016s 00:20:22.940 sys 0m5.004s 00:20:22.940 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:22.940 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:22.940 ************************************ 00:20:22.940 END TEST nvmf_wait_for_buf 00:20:22.940 ************************************ 00:20:22.940 12:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:22.940 12:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:22.940 12:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:22.940 12:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:22.940 12:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:22.940 12:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:29.511 
12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:29.511 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:29.511 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:29.511 12:30:11 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:29.512 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:29.512 Found net devices under 0000:86:00.0: cvl_0_0 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:29.512 Found net devices under 0000:86:00.1: cvl_0_1 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:29.512 ************************************ 00:20:29.512 START TEST nvmf_perf_adq 00:20:29.512 ************************************ 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:29.512 * Looking for test storage... 00:20:29.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:29.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.512 --rc genhtml_branch_coverage=1 00:20:29.512 --rc genhtml_function_coverage=1 00:20:29.512 --rc genhtml_legend=1 00:20:29.512 --rc geninfo_all_blocks=1 00:20:29.512 --rc geninfo_unexecuted_blocks=1 00:20:29.512 00:20:29.512 ' 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:29.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.512 --rc genhtml_branch_coverage=1 00:20:29.512 --rc genhtml_function_coverage=1 00:20:29.512 --rc genhtml_legend=1 00:20:29.512 --rc geninfo_all_blocks=1 00:20:29.512 --rc geninfo_unexecuted_blocks=1 00:20:29.512 00:20:29.512 ' 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:29.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.512 --rc genhtml_branch_coverage=1 00:20:29.512 --rc genhtml_function_coverage=1 00:20:29.512 --rc genhtml_legend=1 00:20:29.512 --rc geninfo_all_blocks=1 00:20:29.512 --rc geninfo_unexecuted_blocks=1 00:20:29.512 00:20:29.512 ' 00:20:29.512 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:29.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.513 --rc genhtml_branch_coverage=1 00:20:29.513 --rc genhtml_function_coverage=1 00:20:29.513 --rc genhtml_legend=1 00:20:29.513 --rc geninfo_all_blocks=1 00:20:29.513 --rc geninfo_unexecuted_blocks=1 00:20:29.513 00:20:29.513 ' 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.513 12:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:29.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:29.513 12:30:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:34.790 12:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:34.790 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:34.790 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:34.791 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:34.791 Found net devices under 0000:86:00.0: cvl_0_0 00:20:34.791 12:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:34.791 Found net devices under 0000:86:00.1: cvl_0_1 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:20:34.791 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:35.728 12:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:37.633 12:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:42.924 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:42.925 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:42.925 12:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:42.925 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:42.925 Found net devices under 0000:86:00.0: cvl_0_0 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:42.925 Found net devices under 0000:86:00.1: cvl_0_1 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:42.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:42.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:20:42.925 00:20:42.925 --- 10.0.0.2 ping statistics --- 00:20:42.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.925 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:42.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:42.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:20:42.925 00:20:42.925 --- 10.0.0.1 ping statistics --- 00:20:42.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.925 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:42.925 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:42.926 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.926 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:42.926 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:42.926 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.926 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:42.926 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:42.926 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:42.926 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:20:42.926 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:42.926 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:42.926 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=488613 00:20:42.926 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 488613 00:20:42.926 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:42.926 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 488613 ']' 00:20:42.926 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.926 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.926 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.926 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.926 12:30:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:42.926 [2024-11-20 12:30:25.898673] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:20:42.926 [2024-11-20 12:30:25.898725] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.926 [2024-11-20 12:30:25.981342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:42.926 [2024-11-20 12:30:26.025541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.926 [2024-11-20 12:30:26.025579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.926 [2024-11-20 12:30:26.025587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.926 [2024-11-20 12:30:26.025592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.926 [2024-11-20 12:30:26.025597] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:42.926 [2024-11-20 12:30:26.027186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.926 [2024-11-20 12:30:26.027298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.926 [2024-11-20 12:30:26.027408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.926 [2024-11-20 12:30:26.027410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:43.185 12:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.185 [2024-11-20 12:30:26.228224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.185 Malloc1 00:20:43.185 12:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.185 [2024-11-20 12:30:26.293248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=488849 00:20:43.185 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:43.185 12:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:45.718 12:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:45.718 12:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.718 12:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.718 12:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.718 12:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:45.718 "tick_rate": 2300000000, 00:20:45.718 "poll_groups": [ 00:20:45.718 { 00:20:45.718 "name": "nvmf_tgt_poll_group_000", 00:20:45.718 "admin_qpairs": 1, 00:20:45.718 "io_qpairs": 1, 00:20:45.718 "current_admin_qpairs": 1, 00:20:45.718 "current_io_qpairs": 1, 00:20:45.718 "pending_bdev_io": 0, 00:20:45.718 "completed_nvme_io": 19889, 00:20:45.718 "transports": [ 00:20:45.718 { 00:20:45.718 "trtype": "TCP" 00:20:45.718 } 00:20:45.718 ] 00:20:45.718 }, 00:20:45.718 { 00:20:45.718 "name": "nvmf_tgt_poll_group_001", 00:20:45.718 "admin_qpairs": 0, 00:20:45.718 "io_qpairs": 1, 00:20:45.718 "current_admin_qpairs": 0, 00:20:45.718 "current_io_qpairs": 1, 00:20:45.718 "pending_bdev_io": 0, 00:20:45.718 "completed_nvme_io": 19986, 00:20:45.718 "transports": [ 00:20:45.718 { 00:20:45.718 "trtype": "TCP" 00:20:45.718 } 00:20:45.718 ] 00:20:45.718 }, 00:20:45.718 { 00:20:45.718 "name": "nvmf_tgt_poll_group_002", 00:20:45.718 "admin_qpairs": 0, 00:20:45.718 "io_qpairs": 1, 00:20:45.718 "current_admin_qpairs": 0, 00:20:45.718 "current_io_qpairs": 1, 00:20:45.718 "pending_bdev_io": 0, 00:20:45.718 "completed_nvme_io": 20010, 00:20:45.718 
"transports": [ 00:20:45.718 { 00:20:45.718 "trtype": "TCP" 00:20:45.719 } 00:20:45.719 ] 00:20:45.719 }, 00:20:45.719 { 00:20:45.719 "name": "nvmf_tgt_poll_group_003", 00:20:45.719 "admin_qpairs": 0, 00:20:45.719 "io_qpairs": 1, 00:20:45.719 "current_admin_qpairs": 0, 00:20:45.719 "current_io_qpairs": 1, 00:20:45.719 "pending_bdev_io": 0, 00:20:45.719 "completed_nvme_io": 19955, 00:20:45.719 "transports": [ 00:20:45.719 { 00:20:45.719 "trtype": "TCP" 00:20:45.719 } 00:20:45.719 ] 00:20:45.719 } 00:20:45.719 ] 00:20:45.719 }' 00:20:45.719 12:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:45.719 12:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:45.719 12:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:45.719 12:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:45.719 12:30:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 488849 00:20:53.842 Initializing NVMe Controllers 00:20:53.842 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:53.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:53.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:53.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:53.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:53.842 Initialization complete. Launching workers. 
00:20:53.842 ======================================================== 00:20:53.842 Latency(us) 00:20:53.842 Device Information : IOPS MiB/s Average min max 00:20:53.842 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10547.60 41.20 6069.15 2276.32 10807.58 00:20:53.842 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10691.30 41.76 5985.82 1435.11 10413.77 00:20:53.842 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10652.30 41.61 6009.57 1828.98 10254.50 00:20:53.842 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10520.60 41.10 6083.58 2370.83 10210.51 00:20:53.842 ======================================================== 00:20:53.842 Total : 42411.80 165.67 6036.76 1435.11 10807.58 00:20:53.842 00:20:53.842 [2024-11-20 12:30:36.451193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f28520 is same with the state(6) to be set 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:53.842 rmmod nvme_tcp 00:20:53.842 rmmod nvme_fabrics 00:20:53.842 rmmod nvme_keyring 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 
-- # set -e 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 488613 ']' 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 488613 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 488613 ']' 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 488613 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 488613 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 488613' 00:20:53.842 killing process with pid 488613 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 488613 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 488613 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # 
iptr 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:53.842 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.791 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:55.791 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:55.791 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:55.791 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:57.170 12:30:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:59.074 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:04.351 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:04.351 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:04.351 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.351 12:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:04.351 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:04.351 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:04.351 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.351 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.351 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.351 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:04.351 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:04.351 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:04.351 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # 
net_devs=() 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:04.352 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.352 
12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:04.352 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.0: cvl_0_0' 00:21:04.352 Found net devices under 0000:86:00.0: cvl_0_0 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:04.352 Found net devices under 0000:86:00.1: cvl_0_1 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:04.352 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:04.352 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:04.352 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:21:04.352 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:04.352 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:04.352 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:04.352 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:04.352 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:04.352 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:04.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:04.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:21:04.352 00:21:04.352 --- 10.0.0.2 ping statistics --- 00:21:04.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.352 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:04.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:04.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:21:04.353 00:21:04.353 --- 10.0.0.1 ping statistics --- 00:21:04.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.353 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:04.353 net.core.busy_poll = 1 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:04.353 net.core.busy_read = 1 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=492524 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 492524 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 492524 ']' 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.353 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.353 [2024-11-20 12:30:47.463808] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:21:04.353 [2024-11-20 12:30:47.463857] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.613 [2024-11-20 12:30:47.542924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:04.613 [2024-11-20 12:30:47.587155] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.613 [2024-11-20 12:30:47.587197] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.613 [2024-11-20 12:30:47.587204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.613 [2024-11-20 12:30:47.587211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:04.613 [2024-11-20 12:30:47.587216] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:04.613 [2024-11-20 12:30:47.588804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.613 [2024-11-20 12:30:47.588911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.613 [2024-11-20 12:30:47.589019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.613 [2024-11-20 12:30:47.589020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.613 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.873 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.873 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:04.873 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.873 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.873 [2024-11-20 12:30:47.803393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.873 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.873 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:04.873 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.873 12:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.873 Malloc1 00:21:04.873 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.873 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:04.873 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.873 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.873 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.873 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:04.873 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.873 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.873 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.873 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:04.873 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.874 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.874 [2024-11-20 12:30:47.869260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.874 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.874 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=492662 
00:21:04.874 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:04.874 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:06.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:06.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.777 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.036 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.036 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:07.036 "tick_rate": 2300000000, 00:21:07.036 "poll_groups": [ 00:21:07.036 { 00:21:07.036 "name": "nvmf_tgt_poll_group_000", 00:21:07.036 "admin_qpairs": 1, 00:21:07.036 "io_qpairs": 3, 00:21:07.036 "current_admin_qpairs": 1, 00:21:07.036 "current_io_qpairs": 3, 00:21:07.036 "pending_bdev_io": 0, 00:21:07.036 "completed_nvme_io": 30284, 00:21:07.036 "transports": [ 00:21:07.036 { 00:21:07.036 "trtype": "TCP" 00:21:07.036 } 00:21:07.036 ] 00:21:07.036 }, 00:21:07.036 { 00:21:07.036 "name": "nvmf_tgt_poll_group_001", 00:21:07.036 "admin_qpairs": 0, 00:21:07.036 "io_qpairs": 1, 00:21:07.036 "current_admin_qpairs": 0, 00:21:07.036 "current_io_qpairs": 1, 00:21:07.036 "pending_bdev_io": 0, 00:21:07.036 "completed_nvme_io": 25596, 00:21:07.036 "transports": [ 00:21:07.036 { 00:21:07.036 "trtype": "TCP" 00:21:07.036 } 00:21:07.036 ] 00:21:07.036 }, 00:21:07.036 { 00:21:07.036 "name": "nvmf_tgt_poll_group_002", 00:21:07.036 "admin_qpairs": 0, 00:21:07.036 "io_qpairs": 0, 00:21:07.036 "current_admin_qpairs": 0, 
00:21:07.036 "current_io_qpairs": 0, 00:21:07.036 "pending_bdev_io": 0, 00:21:07.036 "completed_nvme_io": 0, 00:21:07.036 "transports": [ 00:21:07.036 { 00:21:07.036 "trtype": "TCP" 00:21:07.036 } 00:21:07.036 ] 00:21:07.036 }, 00:21:07.036 { 00:21:07.036 "name": "nvmf_tgt_poll_group_003", 00:21:07.036 "admin_qpairs": 0, 00:21:07.036 "io_qpairs": 0, 00:21:07.036 "current_admin_qpairs": 0, 00:21:07.036 "current_io_qpairs": 0, 00:21:07.036 "pending_bdev_io": 0, 00:21:07.036 "completed_nvme_io": 0, 00:21:07.036 "transports": [ 00:21:07.036 { 00:21:07.036 "trtype": "TCP" 00:21:07.036 } 00:21:07.036 ] 00:21:07.036 } 00:21:07.036 ] 00:21:07.036 }' 00:21:07.036 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:07.036 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:07.036 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:07.036 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:07.036 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 492662 00:21:15.157 Initializing NVMe Controllers 00:21:15.157 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:15.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:15.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:15.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:15.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:15.157 Initialization complete. Launching workers. 
00:21:15.157 ======================================================== 00:21:15.157 Latency(us) 00:21:15.157 Device Information : IOPS MiB/s Average min max 00:21:15.157 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5373.93 20.99 11912.37 1511.87 59980.04 00:21:15.157 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13963.60 54.55 4582.45 1554.22 46532.89 00:21:15.157 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5203.64 20.33 12297.99 1503.84 60154.95 00:21:15.157 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5260.24 20.55 12177.84 1635.47 59512.76 00:21:15.157 ======================================================== 00:21:15.157 Total : 29801.41 116.41 8592.09 1503.84 60154.95 00:21:15.157 00:21:15.157 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:15.157 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:15.157 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:15.157 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:15.157 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:15.157 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:15.157 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:15.157 rmmod nvme_tcp 00:21:15.157 rmmod nvme_fabrics 00:21:15.157 rmmod nvme_keyring 00:21:15.158 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:15.158 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:15.158 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:15.158 12:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 492524 ']' 00:21:15.158 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 492524 00:21:15.158 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 492524 ']' 00:21:15.158 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 492524 00:21:15.158 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:15.158 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:15.158 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 492524 00:21:15.158 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:15.158 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:15.158 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 492524' 00:21:15.158 killing process with pid 492524 00:21:15.158 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 492524 00:21:15.158 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 492524 00:21:15.418 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:15.418 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:15.418 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:15.418 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:15.418 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:15.418 12:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:15.418 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:15.418 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:15.418 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:15.418 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.418 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.418 12:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:18.709 00:21:18.709 real 0m49.796s 00:21:18.709 user 2m43.800s 00:21:18.709 sys 0m10.518s 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.709 ************************************ 00:21:18.709 END TEST nvmf_perf_adq 00:21:18.709 ************************************ 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:21:18.709 ************************************ 00:21:18.709 START TEST nvmf_shutdown 00:21:18.709 ************************************ 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:18.709 * Looking for test storage... 00:21:18.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:18.709 12:31:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:18.709 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:18.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.709 --rc genhtml_branch_coverage=1 00:21:18.709 --rc genhtml_function_coverage=1 00:21:18.709 --rc genhtml_legend=1 00:21:18.709 --rc geninfo_all_blocks=1 00:21:18.709 --rc geninfo_unexecuted_blocks=1 00:21:18.709 00:21:18.709 ' 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:18.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.710 --rc genhtml_branch_coverage=1 00:21:18.710 --rc genhtml_function_coverage=1 00:21:18.710 --rc genhtml_legend=1 00:21:18.710 --rc geninfo_all_blocks=1 00:21:18.710 --rc geninfo_unexecuted_blocks=1 00:21:18.710 00:21:18.710 ' 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:18.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.710 --rc genhtml_branch_coverage=1 00:21:18.710 --rc genhtml_function_coverage=1 00:21:18.710 --rc genhtml_legend=1 00:21:18.710 --rc geninfo_all_blocks=1 00:21:18.710 --rc geninfo_unexecuted_blocks=1 00:21:18.710 00:21:18.710 ' 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:18.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.710 --rc genhtml_branch_coverage=1 00:21:18.710 --rc genhtml_function_coverage=1 00:21:18.710 --rc genhtml_legend=1 00:21:18.710 --rc geninfo_all_blocks=1 00:21:18.710 --rc geninfo_unexecuted_blocks=1 00:21:18.710 00:21:18.710 ' 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:18.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:18.710 ************************************ 00:21:18.710 START TEST nvmf_shutdown_tc1 00:21:18.710 ************************************ 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:18.710 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.711 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:21:18.711 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.711 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:18.711 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:18.711 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:18.711 12:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:25.474 12:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:25.474 12:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:25.474 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.474 12:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:25.474 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:25.474 Found net devices under 0000:86:00.0: cvl_0_0 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:25.474 Found net devices under 0000:86:00.1: cvl_0_1 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:25.474 12:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:25.474 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:25.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:25.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:21:25.475 00:21:25.475 --- 10.0.0.2 ping statistics --- 00:21:25.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.475 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:25.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:25.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:21:25.475 00:21:25.475 --- 10.0.0.1 ping statistics --- 00:21:25.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.475 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=498120 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 498120 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 498120 ']' 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:25.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.475 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.475 [2024-11-20 12:31:07.868834] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:21:25.475 [2024-11-20 12:31:07.868882] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.475 [2024-11-20 12:31:07.949461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:25.475 [2024-11-20 12:31:07.992631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.475 [2024-11-20 12:31:07.992666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.475 [2024-11-20 12:31:07.992673] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.475 [2024-11-20 12:31:07.992679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.475 [2024-11-20 12:31:07.992685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
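The namespace plumbing traced above (`nvmf/common.sh` steps @263 through @291) can be sketched as the following dry-run script. Device names (`cvl_0_0`, `cvl_0_1`), addresses, and the NVMe/TCP port 4420 are taken from the trace; the `run` wrapper is my addition so the sketch prints the commands instead of executing them, since the real ones need root.

```shell
#!/usr/bin/env bash
# Hedged sketch of the target-namespace setup seen in the trace.
# Swap `echo "+ $*"` for `"$@"` to execute for real (requires root).
set -euo pipefail
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }

run ip -4 addr flush cvl_0_0                 # clear stale addresses first
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"                       # target side gets its own netns
run ip link set cvl_0_0 netns "$NS"
run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                       # root ns -> namespaced target
run ip netns exec "$NS" ping -c 1 10.0.0.1   # and back
```

Isolating the target in `cvl_0_0_ns_spdk` is why `nvmf_tgt` is launched under `ip netns exec` later in the trace: initiator and target share a host but traverse a real network path.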
00:21:25.475 [2024-11-20 12:31:07.996966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:25.475 [2024-11-20 12:31:07.997066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:25.475 [2024-11-20 12:31:07.997174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.475 [2024-11-20 12:31:07.997175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.735 [2024-11-20 12:31:08.764335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.735 12:31:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.735 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.994 Malloc1 00:21:25.994 [2024-11-20 12:31:08.888122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.994 Malloc2 00:21:25.994 Malloc3 00:21:25.994 Malloc4 00:21:25.994 Malloc5 00:21:25.994 Malloc6 00:21:26.253 Malloc7 00:21:26.253 Malloc8 00:21:26.253 Malloc9 
00:21:26.253 Malloc10 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=498399 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 498399 /var/tmp/bdevperf.sock 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 498399 ']' 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.253 { 00:21:26.253 "params": { 00:21:26.253 "name": "Nvme$subsystem", 00:21:26.253 "trtype": "$TEST_TRANSPORT", 00:21:26.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.253 "adrfam": "ipv4", 00:21:26.253 "trsvcid": "$NVMF_PORT", 00:21:26.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.253 "hdgst": ${hdgst:-false}, 00:21:26.253 "ddgst": ${ddgst:-false} 00:21:26.253 }, 00:21:26.253 "method": "bdev_nvme_attach_controller" 00:21:26.253 } 00:21:26.253 EOF 00:21:26.253 )") 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.253 { 00:21:26.253 "params": { 00:21:26.253 "name": "Nvme$subsystem", 00:21:26.253 "trtype": "$TEST_TRANSPORT", 00:21:26.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.253 "adrfam": "ipv4", 00:21:26.253 "trsvcid": "$NVMF_PORT", 00:21:26.253 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.253 "hdgst": ${hdgst:-false}, 00:21:26.253 "ddgst": ${ddgst:-false} 00:21:26.253 }, 00:21:26.253 "method": "bdev_nvme_attach_controller" 00:21:26.253 } 00:21:26.253 EOF 00:21:26.253 )") 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.253 { 00:21:26.253 "params": { 00:21:26.253 "name": "Nvme$subsystem", 00:21:26.253 "trtype": "$TEST_TRANSPORT", 00:21:26.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.253 "adrfam": "ipv4", 00:21:26.253 "trsvcid": "$NVMF_PORT", 00:21:26.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.253 "hdgst": ${hdgst:-false}, 00:21:26.253 "ddgst": ${ddgst:-false} 00:21:26.253 }, 00:21:26.253 "method": "bdev_nvme_attach_controller" 00:21:26.253 } 00:21:26.253 EOF 00:21:26.253 )") 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.253 { 00:21:26.253 "params": { 00:21:26.253 "name": "Nvme$subsystem", 00:21:26.253 "trtype": "$TEST_TRANSPORT", 00:21:26.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.253 "adrfam": "ipv4", 00:21:26.253 "trsvcid": "$NVMF_PORT", 00:21:26.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.253 "hdgst": 
${hdgst:-false}, 00:21:26.253 "ddgst": ${ddgst:-false} 00:21:26.253 }, 00:21:26.253 "method": "bdev_nvme_attach_controller" 00:21:26.253 } 00:21:26.253 EOF 00:21:26.253 )") 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.253 { 00:21:26.253 "params": { 00:21:26.253 "name": "Nvme$subsystem", 00:21:26.253 "trtype": "$TEST_TRANSPORT", 00:21:26.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.253 "adrfam": "ipv4", 00:21:26.253 "trsvcid": "$NVMF_PORT", 00:21:26.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.253 "hdgst": ${hdgst:-false}, 00:21:26.253 "ddgst": ${ddgst:-false} 00:21:26.253 }, 00:21:26.253 "method": "bdev_nvme_attach_controller" 00:21:26.253 } 00:21:26.253 EOF 00:21:26.253 )") 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.253 { 00:21:26.253 "params": { 00:21:26.253 "name": "Nvme$subsystem", 00:21:26.253 "trtype": "$TEST_TRANSPORT", 00:21:26.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.253 "adrfam": "ipv4", 00:21:26.253 "trsvcid": "$NVMF_PORT", 00:21:26.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.253 "hdgst": ${hdgst:-false}, 00:21:26.253 "ddgst": ${ddgst:-false} 00:21:26.253 }, 00:21:26.253 "method": "bdev_nvme_attach_controller" 
00:21:26.253 } 00:21:26.253 EOF 00:21:26.253 )") 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.253 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.253 { 00:21:26.253 "params": { 00:21:26.253 "name": "Nvme$subsystem", 00:21:26.253 "trtype": "$TEST_TRANSPORT", 00:21:26.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.253 "adrfam": "ipv4", 00:21:26.253 "trsvcid": "$NVMF_PORT", 00:21:26.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.253 "hdgst": ${hdgst:-false}, 00:21:26.253 "ddgst": ${ddgst:-false} 00:21:26.253 }, 00:21:26.253 "method": "bdev_nvme_attach_controller" 00:21:26.253 } 00:21:26.253 EOF 00:21:26.253 )") 00:21:26.512 [2024-11-20 12:31:09.368907] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:21:26.512 [2024-11-20 12:31:09.368961] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:26.512 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.512 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.512 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.512 { 00:21:26.512 "params": { 00:21:26.512 "name": "Nvme$subsystem", 00:21:26.512 "trtype": "$TEST_TRANSPORT", 00:21:26.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.512 "adrfam": "ipv4", 00:21:26.512 "trsvcid": "$NVMF_PORT", 00:21:26.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.512 "hdgst": ${hdgst:-false}, 00:21:26.512 "ddgst": ${ddgst:-false} 00:21:26.512 }, 00:21:26.512 "method": "bdev_nvme_attach_controller" 00:21:26.512 } 00:21:26.512 EOF 00:21:26.512 )") 00:21:26.512 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.512 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.512 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.512 { 00:21:26.512 "params": { 00:21:26.512 "name": "Nvme$subsystem", 00:21:26.512 "trtype": "$TEST_TRANSPORT", 00:21:26.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.512 "adrfam": "ipv4", 00:21:26.512 "trsvcid": "$NVMF_PORT", 00:21:26.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.512 "hdgst": ${hdgst:-false}, 
00:21:26.512 "ddgst": ${ddgst:-false} 00:21:26.512 }, 00:21:26.512 "method": "bdev_nvme_attach_controller" 00:21:26.512 } 00:21:26.512 EOF 00:21:26.512 )") 00:21:26.512 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.512 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.512 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.512 { 00:21:26.512 "params": { 00:21:26.512 "name": "Nvme$subsystem", 00:21:26.512 "trtype": "$TEST_TRANSPORT", 00:21:26.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.512 "adrfam": "ipv4", 00:21:26.512 "trsvcid": "$NVMF_PORT", 00:21:26.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.512 "hdgst": ${hdgst:-false}, 00:21:26.512 "ddgst": ${ddgst:-false} 00:21:26.512 }, 00:21:26.512 "method": "bdev_nvme_attach_controller" 00:21:26.512 } 00:21:26.512 EOF 00:21:26.512 )") 00:21:26.512 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.513 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
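The repeated `config+=("$(cat <<-EOF ... EOF)")` fragments above are one loop iteration per subsystem: `gen_nvmf_target_json` collects a JSON object per `NvmeN` controller into a bash array, then comma-joins the array (the `IFS=,` / `printf '%s\n'` pair in the trace) to feed bdevperf a config over `--json /dev/fd/63`. A reduced, runnable sketch with illustrative values (two subsystems instead of ten, literals in place of `$TEST_TRANSPORT` etc.):

```shell
#!/usr/bin/env bash
# Hedged sketch of the config-array pattern visible in the trace.
set -euo pipefail
config=()
for subsystem in 1 2; do
  # Each iteration appends one JSON object built from a here-document.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Comma-join in a subshell so IFS stays untouched, mirroring the IFS=, step.
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '%s\n' "$joined"
```

The joined string is what appears expanded in the trace as `'{ "params": { "name": "Nvme1", ... },{ "params": { "name": "Nvme2", ... }'` and so on.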
00:21:26.513 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:26.513 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:26.513 "params": { 00:21:26.513 "name": "Nvme1", 00:21:26.513 "trtype": "tcp", 00:21:26.513 "traddr": "10.0.0.2", 00:21:26.513 "adrfam": "ipv4", 00:21:26.513 "trsvcid": "4420", 00:21:26.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:26.513 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:26.513 "hdgst": false, 00:21:26.513 "ddgst": false 00:21:26.513 }, 00:21:26.513 "method": "bdev_nvme_attach_controller" 00:21:26.513 },{ 00:21:26.513 "params": { 00:21:26.513 "name": "Nvme2", 00:21:26.513 "trtype": "tcp", 00:21:26.513 "traddr": "10.0.0.2", 00:21:26.513 "adrfam": "ipv4", 00:21:26.513 "trsvcid": "4420", 00:21:26.513 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:26.513 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:26.513 "hdgst": false, 00:21:26.513 "ddgst": false 00:21:26.513 }, 00:21:26.513 "method": "bdev_nvme_attach_controller" 00:21:26.513 },{ 00:21:26.513 "params": { 00:21:26.513 "name": "Nvme3", 00:21:26.513 "trtype": "tcp", 00:21:26.513 "traddr": "10.0.0.2", 00:21:26.513 "adrfam": "ipv4", 00:21:26.513 "trsvcid": "4420", 00:21:26.513 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:26.513 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:26.513 "hdgst": false, 00:21:26.513 "ddgst": false 00:21:26.513 }, 00:21:26.513 "method": "bdev_nvme_attach_controller" 00:21:26.513 },{ 00:21:26.513 "params": { 00:21:26.513 "name": "Nvme4", 00:21:26.513 "trtype": "tcp", 00:21:26.513 "traddr": "10.0.0.2", 00:21:26.513 "adrfam": "ipv4", 00:21:26.513 "trsvcid": "4420", 00:21:26.513 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:26.513 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:26.513 "hdgst": false, 00:21:26.513 "ddgst": false 00:21:26.513 }, 00:21:26.513 "method": "bdev_nvme_attach_controller" 00:21:26.513 },{ 00:21:26.513 "params": { 
00:21:26.513 "name": "Nvme5", 00:21:26.513 "trtype": "tcp", 00:21:26.513 "traddr": "10.0.0.2", 00:21:26.513 "adrfam": "ipv4", 00:21:26.513 "trsvcid": "4420", 00:21:26.513 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:26.513 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:26.513 "hdgst": false, 00:21:26.513 "ddgst": false 00:21:26.513 }, 00:21:26.513 "method": "bdev_nvme_attach_controller" 00:21:26.513 },{ 00:21:26.513 "params": { 00:21:26.513 "name": "Nvme6", 00:21:26.513 "trtype": "tcp", 00:21:26.513 "traddr": "10.0.0.2", 00:21:26.513 "adrfam": "ipv4", 00:21:26.513 "trsvcid": "4420", 00:21:26.513 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:26.513 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:26.513 "hdgst": false, 00:21:26.513 "ddgst": false 00:21:26.513 }, 00:21:26.513 "method": "bdev_nvme_attach_controller" 00:21:26.513 },{ 00:21:26.513 "params": { 00:21:26.513 "name": "Nvme7", 00:21:26.513 "trtype": "tcp", 00:21:26.513 "traddr": "10.0.0.2", 00:21:26.513 "adrfam": "ipv4", 00:21:26.513 "trsvcid": "4420", 00:21:26.513 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:26.513 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:26.513 "hdgst": false, 00:21:26.513 "ddgst": false 00:21:26.513 }, 00:21:26.513 "method": "bdev_nvme_attach_controller" 00:21:26.513 },{ 00:21:26.513 "params": { 00:21:26.513 "name": "Nvme8", 00:21:26.513 "trtype": "tcp", 00:21:26.513 "traddr": "10.0.0.2", 00:21:26.513 "adrfam": "ipv4", 00:21:26.513 "trsvcid": "4420", 00:21:26.513 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:26.513 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:26.513 "hdgst": false, 00:21:26.513 "ddgst": false 00:21:26.513 }, 00:21:26.513 "method": "bdev_nvme_attach_controller" 00:21:26.513 },{ 00:21:26.513 "params": { 00:21:26.513 "name": "Nvme9", 00:21:26.513 "trtype": "tcp", 00:21:26.513 "traddr": "10.0.0.2", 00:21:26.513 "adrfam": "ipv4", 00:21:26.513 "trsvcid": "4420", 00:21:26.513 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:26.513 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:26.513 "hdgst": false, 00:21:26.513 "ddgst": false 00:21:26.513 }, 00:21:26.513 "method": "bdev_nvme_attach_controller" 00:21:26.513 },{ 00:21:26.513 "params": { 00:21:26.513 "name": "Nvme10", 00:21:26.513 "trtype": "tcp", 00:21:26.513 "traddr": "10.0.0.2", 00:21:26.513 "adrfam": "ipv4", 00:21:26.513 "trsvcid": "4420", 00:21:26.513 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:26.513 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:26.513 "hdgst": false, 00:21:26.513 "ddgst": false 00:21:26.513 }, 00:21:26.513 "method": "bdev_nvme_attach_controller" 00:21:26.513 }' 00:21:26.513 [2024-11-20 12:31:09.445831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.513 [2024-11-20 12:31:09.487246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.425 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.425 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:28.425 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:28.425 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.425 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:28.425 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.425 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 498399 00:21:28.425 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:28.425 12:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:29.361 
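The shutdown choreography in the surrounding trace is: SIGKILL the helper process (`kill -9 $perfpid`, shutdown.sh@84), sleep, then probe with `kill -0 $nvmfpid` (shutdown.sh@89), which sends no signal and merely succeeds if the target pid still exists. A minimal stand-in using `sleep` for both daemons:

```shell
#!/usr/bin/env bash
# Hedged sketch of the kill -9 / kill -0 sequence; `sleep` stands in for
# bdev_svc (perfpid) and nvmf_tgt (nvmfpid).
set -euo pipefail
sleep 60 & perfpid=$!
sleep 60 & nvmfpid=$!

kill -9 "$perfpid"                    # hard-kill the helper
wait "$perfpid" 2>/dev/null || true   # reap it; exit 137 (128+SIGKILL) expected

kill -0 "$nvmfpid"                    # liveness probe: succeeds iff target runs
echo "target survived helper kill"

kill "$nvmfpid"                       # cleanup
wait "$nvmfpid" 2>/dev/null || true
```

This is why the test can assert the target outlived the helper: `kill -0` returns nonzero only when the pid is gone (or not signalable).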
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 498399 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:29.361 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 498120 00:21:29.361 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:29.361 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:29.361 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:29.361 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:29.361 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.361 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.361 { 00:21:29.361 "params": { 00:21:29.361 "name": "Nvme$subsystem", 00:21:29.361 "trtype": "$TEST_TRANSPORT", 00:21:29.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.361 "adrfam": "ipv4", 00:21:29.361 "trsvcid": "$NVMF_PORT", 00:21:29.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.361 "hdgst": ${hdgst:-false}, 00:21:29.361 "ddgst": ${ddgst:-false} 00:21:29.361 }, 00:21:29.361 "method": "bdev_nvme_attach_controller" 00:21:29.361 } 00:21:29.361 EOF 00:21:29.361 )") 00:21:29.361 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.362 12:31:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.362 { 00:21:29.362 "params": { 00:21:29.362 "name": "Nvme$subsystem", 00:21:29.362 "trtype": "$TEST_TRANSPORT", 00:21:29.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.362 "adrfam": "ipv4", 00:21:29.362 "trsvcid": "$NVMF_PORT", 00:21:29.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.362 "hdgst": ${hdgst:-false}, 00:21:29.362 "ddgst": ${ddgst:-false} 00:21:29.362 }, 00:21:29.362 "method": "bdev_nvme_attach_controller" 00:21:29.362 } 00:21:29.362 EOF 00:21:29.362 )") 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.362 { 00:21:29.362 "params": { 00:21:29.362 "name": "Nvme$subsystem", 00:21:29.362 "trtype": "$TEST_TRANSPORT", 00:21:29.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.362 "adrfam": "ipv4", 00:21:29.362 "trsvcid": "$NVMF_PORT", 00:21:29.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.362 "hdgst": ${hdgst:-false}, 00:21:29.362 "ddgst": ${ddgst:-false} 00:21:29.362 }, 00:21:29.362 "method": "bdev_nvme_attach_controller" 00:21:29.362 } 00:21:29.362 EOF 00:21:29.362 )") 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.362 
12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.362 { 00:21:29.362 "params": { 00:21:29.362 "name": "Nvme$subsystem", 00:21:29.362 "trtype": "$TEST_TRANSPORT", 00:21:29.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.362 "adrfam": "ipv4", 00:21:29.362 "trsvcid": "$NVMF_PORT", 00:21:29.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.362 "hdgst": ${hdgst:-false}, 00:21:29.362 "ddgst": ${ddgst:-false} 00:21:29.362 }, 00:21:29.362 "method": "bdev_nvme_attach_controller" 00:21:29.362 } 00:21:29.362 EOF 00:21:29.362 )") 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.362 { 00:21:29.362 "params": { 00:21:29.362 "name": "Nvme$subsystem", 00:21:29.362 "trtype": "$TEST_TRANSPORT", 00:21:29.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.362 "adrfam": "ipv4", 00:21:29.362 "trsvcid": "$NVMF_PORT", 00:21:29.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.362 "hdgst": ${hdgst:-false}, 00:21:29.362 "ddgst": ${ddgst:-false} 00:21:29.362 }, 00:21:29.362 "method": "bdev_nvme_attach_controller" 00:21:29.362 } 00:21:29.362 EOF 00:21:29.362 )") 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:21:29.362 { 00:21:29.362 "params": { 00:21:29.362 "name": "Nvme$subsystem", 00:21:29.362 "trtype": "$TEST_TRANSPORT", 00:21:29.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.362 "adrfam": "ipv4", 00:21:29.362 "trsvcid": "$NVMF_PORT", 00:21:29.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.362 "hdgst": ${hdgst:-false}, 00:21:29.362 "ddgst": ${ddgst:-false} 00:21:29.362 }, 00:21:29.362 "method": "bdev_nvme_attach_controller" 00:21:29.362 } 00:21:29.362 EOF 00:21:29.362 )") 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.362 { 00:21:29.362 "params": { 00:21:29.362 "name": "Nvme$subsystem", 00:21:29.362 "trtype": "$TEST_TRANSPORT", 00:21:29.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.362 "adrfam": "ipv4", 00:21:29.362 "trsvcid": "$NVMF_PORT", 00:21:29.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.362 "hdgst": ${hdgst:-false}, 00:21:29.362 "ddgst": ${ddgst:-false} 00:21:29.362 }, 00:21:29.362 "method": "bdev_nvme_attach_controller" 00:21:29.362 } 00:21:29.362 EOF 00:21:29.362 )") 00:21:29.362 [2024-11-20 12:31:12.296975] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:21:29.362 [2024-11-20 12:31:12.297026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid498891 ] 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.362 { 00:21:29.362 "params": { 00:21:29.362 "name": "Nvme$subsystem", 00:21:29.362 "trtype": "$TEST_TRANSPORT", 00:21:29.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.362 "adrfam": "ipv4", 00:21:29.362 "trsvcid": "$NVMF_PORT", 00:21:29.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.362 "hdgst": ${hdgst:-false}, 00:21:29.362 "ddgst": ${ddgst:-false} 00:21:29.362 }, 00:21:29.362 "method": "bdev_nvme_attach_controller" 00:21:29.362 } 00:21:29.362 EOF 00:21:29.362 )") 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.362 { 00:21:29.362 "params": { 00:21:29.362 "name": "Nvme$subsystem", 00:21:29.362 "trtype": "$TEST_TRANSPORT", 00:21:29.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.362 "adrfam": "ipv4", 00:21:29.362 "trsvcid": "$NVMF_PORT", 00:21:29.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.362 "hdgst": 
${hdgst:-false}, 00:21:29.362 "ddgst": ${ddgst:-false} 00:21:29.362 }, 00:21:29.362 "method": "bdev_nvme_attach_controller" 00:21:29.362 } 00:21:29.362 EOF 00:21:29.362 )") 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.362 { 00:21:29.362 "params": { 00:21:29.362 "name": "Nvme$subsystem", 00:21:29.362 "trtype": "$TEST_TRANSPORT", 00:21:29.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.362 "adrfam": "ipv4", 00:21:29.362 "trsvcid": "$NVMF_PORT", 00:21:29.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.362 "hdgst": ${hdgst:-false}, 00:21:29.362 "ddgst": ${ddgst:-false} 00:21:29.362 }, 00:21:29.362 "method": "bdev_nvme_attach_controller" 00:21:29.362 } 00:21:29.362 EOF 00:21:29.362 )") 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
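The trace above expands the same heredoc once per subsystem before handing the fragments to `jq`. A minimal stand-alone sketch of that pattern follows; the variable values are stand-ins for what the harness derives, and `python3 -m json.tool` substitutes for the harness's `jq .` so the sketch has no extra dependency:

```shell
# Rebuild the per-subsystem JSON fragments the way the trace shows
# nvmf/common.sh doing it: one heredoc per loop iteration, appended
# to the config array, then comma-joined.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

# Comma-join as the IFS=, / printf lines in the trace do; brackets are
# added here only so the result validates as one JSON array.
joined=$(IFS=,; printf '[%s]' "${config[*]}")
echo "$joined" | python3 -m json.tool > /dev/null && echo "built ${#config[@]} fragments"
```

Because `hdgst`/`ddgst` are unset, the `${hdgst:-false}` expansions emit the JSON boolean `false`, matching the `"hdgst": false` seen in the final printf output further down.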
00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:29.362 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:29.362 "params": { 00:21:29.362 "name": "Nvme1", 00:21:29.362 "trtype": "tcp", 00:21:29.362 "traddr": "10.0.0.2", 00:21:29.362 "adrfam": "ipv4", 00:21:29.362 "trsvcid": "4420", 00:21:29.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.362 "hdgst": false, 00:21:29.362 "ddgst": false 00:21:29.362 }, 00:21:29.362 "method": "bdev_nvme_attach_controller" 00:21:29.362 },{ 00:21:29.362 "params": { 00:21:29.362 "name": "Nvme2", 00:21:29.362 "trtype": "tcp", 00:21:29.362 "traddr": "10.0.0.2", 00:21:29.362 "adrfam": "ipv4", 00:21:29.362 "trsvcid": "4420", 00:21:29.362 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:29.362 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:29.362 "hdgst": false, 00:21:29.363 "ddgst": false 00:21:29.363 }, 00:21:29.363 "method": "bdev_nvme_attach_controller" 00:21:29.363 },{ 00:21:29.363 "params": { 00:21:29.363 "name": "Nvme3", 00:21:29.363 "trtype": "tcp", 00:21:29.363 "traddr": "10.0.0.2", 00:21:29.363 "adrfam": "ipv4", 00:21:29.363 "trsvcid": "4420", 00:21:29.363 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:29.363 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:29.363 "hdgst": false, 00:21:29.363 "ddgst": false 00:21:29.363 }, 00:21:29.363 "method": "bdev_nvme_attach_controller" 00:21:29.363 },{ 00:21:29.363 "params": { 00:21:29.363 "name": "Nvme4", 00:21:29.363 "trtype": "tcp", 00:21:29.363 "traddr": "10.0.0.2", 00:21:29.363 "adrfam": "ipv4", 00:21:29.363 "trsvcid": "4420", 00:21:29.363 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:29.363 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:29.363 "hdgst": false, 00:21:29.363 "ddgst": false 00:21:29.363 }, 00:21:29.363 "method": "bdev_nvme_attach_controller" 00:21:29.363 },{ 00:21:29.363 "params": { 
00:21:29.363 "name": "Nvme5", 00:21:29.363 "trtype": "tcp", 00:21:29.363 "traddr": "10.0.0.2", 00:21:29.363 "adrfam": "ipv4", 00:21:29.363 "trsvcid": "4420", 00:21:29.363 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:29.363 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:29.363 "hdgst": false, 00:21:29.363 "ddgst": false 00:21:29.363 }, 00:21:29.363 "method": "bdev_nvme_attach_controller" 00:21:29.363 },{ 00:21:29.363 "params": { 00:21:29.363 "name": "Nvme6", 00:21:29.363 "trtype": "tcp", 00:21:29.363 "traddr": "10.0.0.2", 00:21:29.363 "adrfam": "ipv4", 00:21:29.363 "trsvcid": "4420", 00:21:29.363 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:29.363 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:29.363 "hdgst": false, 00:21:29.363 "ddgst": false 00:21:29.363 }, 00:21:29.363 "method": "bdev_nvme_attach_controller" 00:21:29.363 },{ 00:21:29.363 "params": { 00:21:29.363 "name": "Nvme7", 00:21:29.363 "trtype": "tcp", 00:21:29.363 "traddr": "10.0.0.2", 00:21:29.363 "adrfam": "ipv4", 00:21:29.363 "trsvcid": "4420", 00:21:29.363 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:29.363 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:29.363 "hdgst": false, 00:21:29.363 "ddgst": false 00:21:29.363 }, 00:21:29.363 "method": "bdev_nvme_attach_controller" 00:21:29.363 },{ 00:21:29.363 "params": { 00:21:29.363 "name": "Nvme8", 00:21:29.363 "trtype": "tcp", 00:21:29.363 "traddr": "10.0.0.2", 00:21:29.363 "adrfam": "ipv4", 00:21:29.363 "trsvcid": "4420", 00:21:29.363 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:29.363 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:29.363 "hdgst": false, 00:21:29.363 "ddgst": false 00:21:29.363 }, 00:21:29.363 "method": "bdev_nvme_attach_controller" 00:21:29.363 },{ 00:21:29.363 "params": { 00:21:29.363 "name": "Nvme9", 00:21:29.363 "trtype": "tcp", 00:21:29.363 "traddr": "10.0.0.2", 00:21:29.363 "adrfam": "ipv4", 00:21:29.363 "trsvcid": "4420", 00:21:29.363 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:29.363 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:29.363 "hdgst": false, 00:21:29.363 "ddgst": false 00:21:29.363 }, 00:21:29.363 "method": "bdev_nvme_attach_controller" 00:21:29.363 },{ 00:21:29.363 "params": { 00:21:29.363 "name": "Nvme10", 00:21:29.363 "trtype": "tcp", 00:21:29.363 "traddr": "10.0.0.2", 00:21:29.363 "adrfam": "ipv4", 00:21:29.363 "trsvcid": "4420", 00:21:29.363 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:29.363 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:29.363 "hdgst": false, 00:21:29.363 "ddgst": false 00:21:29.363 }, 00:21:29.363 "method": "bdev_nvme_attach_controller" 00:21:29.363 }' 00:21:29.363 [2024-11-20 12:31:12.374480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.363 [2024-11-20 12:31:12.415699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.741 Running I/O for 1 seconds... 00:21:31.937 2180.00 IOPS, 136.25 MiB/s 00:21:31.937 Latency(us) 00:21:31.937 [2024-11-20T11:31:15.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.937 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.937 Verification LBA range: start 0x0 length 0x400 00:21:31.937 Nvme1n1 : 1.16 276.71 17.29 0.00 0.00 229132.20 16070.57 217921.45 00:21:31.937 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.937 Verification LBA range: start 0x0 length 0x400 00:21:31.937 Nvme2n1 : 1.07 239.47 14.97 0.00 0.00 260794.99 16754.42 227039.50 00:21:31.937 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.937 Verification LBA range: start 0x0 length 0x400 00:21:31.937 Nvme3n1 : 1.14 280.00 17.50 0.00 0.00 219238.93 13506.11 216097.84 00:21:31.937 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.937 Verification LBA range: start 0x0 length 0x400 00:21:31.937 Nvme4n1 : 1.12 285.53 17.85 0.00 0.00 212529.86 16982.37 204244.37 00:21:31.937 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:31.937 Verification LBA range: start 0x0 length 0x400 00:21:31.937 Nvme5n1 : 1.17 274.00 17.12 0.00 0.00 216914.19 17210.32 217009.64 00:21:31.937 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.937 Verification LBA range: start 0x0 length 0x400 00:21:31.937 Nvme6n1 : 1.16 281.95 17.62 0.00 0.00 208084.36 5185.89 215186.03 00:21:31.938 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.938 Verification LBA range: start 0x0 length 0x400 00:21:31.938 Nvme7n1 : 1.16 275.56 17.22 0.00 0.00 211153.25 14930.81 235245.75 00:21:31.938 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.938 Verification LBA range: start 0x0 length 0x400 00:21:31.938 Nvme8n1 : 1.17 273.34 17.08 0.00 0.00 209847.34 15386.71 220656.86 00:21:31.938 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.938 Verification LBA range: start 0x0 length 0x400 00:21:31.938 Nvme9n1 : 1.17 272.52 17.03 0.00 0.00 207455.81 19375.86 221568.67 00:21:31.938 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.938 Verification LBA range: start 0x0 length 0x400 00:21:31.938 Nvme10n1 : 1.18 271.80 16.99 0.00 0.00 204982.36 15614.66 238892.97 00:21:31.938 [2024-11-20T11:31:15.054Z] =================================================================================================================== 00:21:31.938 [2024-11-20T11:31:15.054Z] Total : 2730.88 170.68 0.00 0.00 217117.19 5185.89 238892.97 00:21:31.938 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:31.938 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:31.938 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
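As a quick consistency check on the bdevperf summary above: with the 65536-byte IO size stated in each job header, the reported throughput follows directly from the IOPS figure:

```shell
# Check that 2180.00 IOPS at 64 KiB per IO equals the 136.25 MiB/s
# printed next to it (both values copied from the summary above).
mib_s=$(awk 'BEGIN { printf "%.2f", 2180 * 65536 / (1024 * 1024) }')
echo "$mib_s MiB/s"   # 136.25 MiB/s
```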
00:21:31.938 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:31.938 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:31.938 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:31.938 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:31.938 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:31.938 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:31.938 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:31.938 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:31.938 rmmod nvme_tcp 00:21:32.197 rmmod nvme_fabrics 00:21:32.197 rmmod nvme_keyring 00:21:32.197 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:32.197 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:32.197 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:32.197 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 498120 ']' 00:21:32.197 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 498120 00:21:32.197 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 498120 ']' 00:21:32.197 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 498120 00:21:32.197 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:21:32.197 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:32.197 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 498120 00:21:32.197 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:32.197 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:32.197 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 498120' 00:21:32.197 killing process with pid 498120 00:21:32.197 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 498120 00:21:32.197 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 498120 00:21:32.456 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:32.456 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:32.456 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:32.456 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:32.456 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:32.456 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:32.456 12:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:32.456 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:32.456 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:32.456 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.456 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.456 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:34.994 00:21:34.994 real 0m15.787s 00:21:34.994 user 0m35.935s 00:21:34.994 sys 0m5.877s 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:34.994 ************************************ 00:21:34.994 END TEST nvmf_shutdown_tc1 00:21:34.994 ************************************ 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:34.994 ************************************ 00:21:34.994 
START TEST nvmf_shutdown_tc2 00:21:34.994 ************************************ 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:34.994 12:31:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:34.994 12:31:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:34.994 12:31:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:34.994 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:34.994 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:34.994 12:31:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.994 12:31:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:34.994 Found net devices under 0000:86:00.0: cvl_0_0 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:34.994 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:34.995 Found net devices under 0000:86:00.1: cvl_0_1 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:34.995 12:31:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:34.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:34.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:21:34.995 00:21:34.995 --- 10.0.0.2 ping statistics --- 00:21:34.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.995 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:34.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:34.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:21:34.995 00:21:34.995 --- 10.0.0.1 ping statistics --- 00:21:34.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.995 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:34.995 12:31:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=499910 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 499910 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 499910 ']' 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.995 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.995 [2024-11-20 12:31:18.032067] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:21:34.995 [2024-11-20 12:31:18.032111] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.254 [2024-11-20 12:31:18.111399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.254 [2024-11-20 12:31:18.153580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.254 [2024-11-20 12:31:18.153617] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.254 [2024-11-20 12:31:18.153625] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.254 [2024-11-20 12:31:18.153631] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.255 [2024-11-20 12:31:18.153636] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
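The `nvmf_tcp_init` block traced earlier (address flush, namespace creation, moving the target NIC into the namespace, the iptables accept rule for port 4420, and the cross-namespace pings) can be sketched as the script below. This is a minimal dry-run sketch, not the actual `nvmf/common.sh` code: the `run()` wrapper is my addition so the sequence can be printed without root privileges (drop it to execute for real), while the interface names, namespace name, and addresses are the ones visible in the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init sequence from the log above.
# run() only prints each command; remove it (and run as root) to
# actually configure the target/initiator split on one host.
set -euo pipefail

TARGET_IF=cvl_0_0        # moved into the namespace, owns the target IP
INITIATOR_IF=cvl_0_1     # stays in the default namespace
NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

run() { echo "+ $*"; }   # dry-run wrapper (my addition)

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Sanity checks, as in the log: each side pings the other.
run ping -c 1 "$TARGET_IP"
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
```

Putting the target interface in its own network namespace is what lets one physical machine act as both NVMe-oF target (inside `cvl_0_0_ns_spdk`) and initiator (default namespace) over a real NIC pair.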
00:21:35.255 [2024-11-20 12:31:18.155221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.255 [2024-11-20 12:31:18.155330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.255 [2024-11-20 12:31:18.155434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.255 [2024-11-20 12:31:18.155436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.255 [2024-11-20 12:31:18.291485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.255 12:31:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.255 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.514 Malloc1 00:21:35.514 [2024-11-20 12:31:18.402201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.514 Malloc2 00:21:35.514 Malloc3 00:21:35.514 Malloc4 00:21:35.514 Malloc5 00:21:35.514 Malloc6 00:21:35.773 Malloc7 00:21:35.773 Malloc8 00:21:35.773 Malloc9 
00:21:35.773 Malloc10 00:21:35.773 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.773 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:35.773 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:35.773 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.773 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=500115 00:21:35.773 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 500115 /var/tmp/bdevperf.sock 00:21:35.773 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 500115 ']' 00:21:35.773 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.773 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:35.773 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:35.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.774 { 00:21:35.774 "params": { 00:21:35.774 "name": "Nvme$subsystem", 00:21:35.774 "trtype": "$TEST_TRANSPORT", 00:21:35.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.774 "adrfam": "ipv4", 00:21:35.774 "trsvcid": "$NVMF_PORT", 00:21:35.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.774 "hdgst": ${hdgst:-false}, 00:21:35.774 "ddgst": ${ddgst:-false} 00:21:35.774 }, 00:21:35.774 "method": "bdev_nvme_attach_controller" 00:21:35.774 } 00:21:35.774 EOF 00:21:35.774 )") 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.774 { 00:21:35.774 "params": { 00:21:35.774 "name": "Nvme$subsystem", 00:21:35.774 "trtype": "$TEST_TRANSPORT", 00:21:35.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.774 
"adrfam": "ipv4", 00:21:35.774 "trsvcid": "$NVMF_PORT", 00:21:35.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.774 "hdgst": ${hdgst:-false}, 00:21:35.774 "ddgst": ${ddgst:-false} 00:21:35.774 }, 00:21:35.774 "method": "bdev_nvme_attach_controller" 00:21:35.774 } 00:21:35.774 EOF 00:21:35.774 )") 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.774 { 00:21:35.774 "params": { 00:21:35.774 "name": "Nvme$subsystem", 00:21:35.774 "trtype": "$TEST_TRANSPORT", 00:21:35.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.774 "adrfam": "ipv4", 00:21:35.774 "trsvcid": "$NVMF_PORT", 00:21:35.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.774 "hdgst": ${hdgst:-false}, 00:21:35.774 "ddgst": ${ddgst:-false} 00:21:35.774 }, 00:21:35.774 "method": "bdev_nvme_attach_controller" 00:21:35.774 } 00:21:35.774 EOF 00:21:35.774 )") 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.774 { 00:21:35.774 "params": { 00:21:35.774 "name": "Nvme$subsystem", 00:21:35.774 "trtype": "$TEST_TRANSPORT", 00:21:35.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.774 "adrfam": "ipv4", 00:21:35.774 "trsvcid": "$NVMF_PORT", 00:21:35.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:35.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.774 "hdgst": ${hdgst:-false}, 00:21:35.774 "ddgst": ${ddgst:-false} 00:21:35.774 }, 00:21:35.774 "method": "bdev_nvme_attach_controller" 00:21:35.774 } 00:21:35.774 EOF 00:21:35.774 )") 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.774 { 00:21:35.774 "params": { 00:21:35.774 "name": "Nvme$subsystem", 00:21:35.774 "trtype": "$TEST_TRANSPORT", 00:21:35.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.774 "adrfam": "ipv4", 00:21:35.774 "trsvcid": "$NVMF_PORT", 00:21:35.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.774 "hdgst": ${hdgst:-false}, 00:21:35.774 "ddgst": ${ddgst:-false} 00:21:35.774 }, 00:21:35.774 "method": "bdev_nvme_attach_controller" 00:21:35.774 } 00:21:35.774 EOF 00:21:35.774 )") 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.774 { 00:21:35.774 "params": { 00:21:35.774 "name": "Nvme$subsystem", 00:21:35.774 "trtype": "$TEST_TRANSPORT", 00:21:35.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.774 "adrfam": "ipv4", 00:21:35.774 "trsvcid": "$NVMF_PORT", 00:21:35.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.774 "hdgst": ${hdgst:-false}, 00:21:35.774 "ddgst": 
${ddgst:-false} 00:21:35.774 }, 00:21:35.774 "method": "bdev_nvme_attach_controller" 00:21:35.774 } 00:21:35.774 EOF 00:21:35.774 )") 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.774 { 00:21:35.774 "params": { 00:21:35.774 "name": "Nvme$subsystem", 00:21:35.774 "trtype": "$TEST_TRANSPORT", 00:21:35.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.774 "adrfam": "ipv4", 00:21:35.774 "trsvcid": "$NVMF_PORT", 00:21:35.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.774 "hdgst": ${hdgst:-false}, 00:21:35.774 "ddgst": ${ddgst:-false} 00:21:35.774 }, 00:21:35.774 "method": "bdev_nvme_attach_controller" 00:21:35.774 } 00:21:35.774 EOF 00:21:35.774 )") 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.774 [2024-11-20 12:31:18.880932] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:21:35.774 [2024-11-20 12:31:18.880989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid500115 ] 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.774 { 00:21:35.774 "params": { 00:21:35.774 "name": "Nvme$subsystem", 00:21:35.774 "trtype": "$TEST_TRANSPORT", 00:21:35.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.774 "adrfam": "ipv4", 00:21:35.774 "trsvcid": "$NVMF_PORT", 00:21:35.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.774 "hdgst": ${hdgst:-false}, 00:21:35.774 "ddgst": ${ddgst:-false} 00:21:35.774 }, 00:21:35.774 "method": "bdev_nvme_attach_controller" 00:21:35.774 } 00:21:35.774 EOF 00:21:35.774 )") 00:21:35.774 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:36.034 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.034 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.034 { 00:21:36.034 "params": { 00:21:36.034 "name": "Nvme$subsystem", 00:21:36.034 "trtype": "$TEST_TRANSPORT", 00:21:36.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.034 "adrfam": "ipv4", 00:21:36.034 "trsvcid": "$NVMF_PORT", 00:21:36.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.034 "hdgst": ${hdgst:-false}, 00:21:36.034 "ddgst": ${ddgst:-false} 00:21:36.034 }, 00:21:36.034 "method": 
"bdev_nvme_attach_controller" 00:21:36.034 } 00:21:36.034 EOF 00:21:36.034 )") 00:21:36.034 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:36.034 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.034 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.034 { 00:21:36.034 "params": { 00:21:36.034 "name": "Nvme$subsystem", 00:21:36.034 "trtype": "$TEST_TRANSPORT", 00:21:36.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.034 "adrfam": "ipv4", 00:21:36.034 "trsvcid": "$NVMF_PORT", 00:21:36.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.034 "hdgst": ${hdgst:-false}, 00:21:36.034 "ddgst": ${ddgst:-false} 00:21:36.034 }, 00:21:36.034 "method": "bdev_nvme_attach_controller" 00:21:36.034 } 00:21:36.034 EOF 00:21:36.034 )") 00:21:36.034 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:36.034 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:21:36.034 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:36.034 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:36.034 "params": { 00:21:36.034 "name": "Nvme1", 00:21:36.034 "trtype": "tcp", 00:21:36.034 "traddr": "10.0.0.2", 00:21:36.034 "adrfam": "ipv4", 00:21:36.034 "trsvcid": "4420", 00:21:36.034 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.034 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:36.034 "hdgst": false, 00:21:36.034 "ddgst": false 00:21:36.034 }, 00:21:36.034 "method": "bdev_nvme_attach_controller" 00:21:36.034 },{ 00:21:36.034 "params": { 00:21:36.034 "name": "Nvme2", 00:21:36.034 "trtype": "tcp", 00:21:36.034 "traddr": "10.0.0.2", 00:21:36.034 "adrfam": "ipv4", 00:21:36.034 "trsvcid": "4420", 00:21:36.034 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:36.034 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:36.034 "hdgst": false, 00:21:36.034 "ddgst": false 00:21:36.034 }, 00:21:36.034 "method": "bdev_nvme_attach_controller" 00:21:36.034 },{ 00:21:36.034 "params": { 00:21:36.034 "name": "Nvme3", 00:21:36.034 "trtype": "tcp", 00:21:36.034 "traddr": "10.0.0.2", 00:21:36.034 "adrfam": "ipv4", 00:21:36.034 "trsvcid": "4420", 00:21:36.034 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:36.034 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:36.034 "hdgst": false, 00:21:36.034 "ddgst": false 00:21:36.034 }, 00:21:36.034 "method": "bdev_nvme_attach_controller" 00:21:36.034 },{ 00:21:36.034 "params": { 00:21:36.034 "name": "Nvme4", 00:21:36.034 "trtype": "tcp", 00:21:36.034 "traddr": "10.0.0.2", 00:21:36.034 "adrfam": "ipv4", 00:21:36.034 "trsvcid": "4420", 00:21:36.034 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:36.034 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:36.034 "hdgst": false, 00:21:36.034 "ddgst": false 00:21:36.034 }, 00:21:36.034 "method": "bdev_nvme_attach_controller" 00:21:36.034 },{ 00:21:36.034 "params": { 
00:21:36.034 "name": "Nvme5", 00:21:36.034 "trtype": "tcp", 00:21:36.034 "traddr": "10.0.0.2", 00:21:36.034 "adrfam": "ipv4", 00:21:36.034 "trsvcid": "4420", 00:21:36.034 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:36.034 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:36.034 "hdgst": false, 00:21:36.034 "ddgst": false 00:21:36.034 }, 00:21:36.034 "method": "bdev_nvme_attach_controller" 00:21:36.034 },{ 00:21:36.034 "params": { 00:21:36.034 "name": "Nvme6", 00:21:36.034 "trtype": "tcp", 00:21:36.034 "traddr": "10.0.0.2", 00:21:36.034 "adrfam": "ipv4", 00:21:36.034 "trsvcid": "4420", 00:21:36.034 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:36.034 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:36.034 "hdgst": false, 00:21:36.034 "ddgst": false 00:21:36.034 }, 00:21:36.034 "method": "bdev_nvme_attach_controller" 00:21:36.034 },{ 00:21:36.034 "params": { 00:21:36.034 "name": "Nvme7", 00:21:36.034 "trtype": "tcp", 00:21:36.034 "traddr": "10.0.0.2", 00:21:36.034 "adrfam": "ipv4", 00:21:36.034 "trsvcid": "4420", 00:21:36.034 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:36.034 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:36.034 "hdgst": false, 00:21:36.034 "ddgst": false 00:21:36.034 }, 00:21:36.034 "method": "bdev_nvme_attach_controller" 00:21:36.034 },{ 00:21:36.035 "params": { 00:21:36.035 "name": "Nvme8", 00:21:36.035 "trtype": "tcp", 00:21:36.035 "traddr": "10.0.0.2", 00:21:36.035 "adrfam": "ipv4", 00:21:36.035 "trsvcid": "4420", 00:21:36.035 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:36.035 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:36.035 "hdgst": false, 00:21:36.035 "ddgst": false 00:21:36.035 }, 00:21:36.035 "method": "bdev_nvme_attach_controller" 00:21:36.035 },{ 00:21:36.035 "params": { 00:21:36.035 "name": "Nvme9", 00:21:36.035 "trtype": "tcp", 00:21:36.035 "traddr": "10.0.0.2", 00:21:36.035 "adrfam": "ipv4", 00:21:36.035 "trsvcid": "4420", 00:21:36.035 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:36.035 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:36.035 "hdgst": false, 00:21:36.035 "ddgst": false 00:21:36.035 }, 00:21:36.035 "method": "bdev_nvme_attach_controller" 00:21:36.035 },{ 00:21:36.035 "params": { 00:21:36.035 "name": "Nvme10", 00:21:36.035 "trtype": "tcp", 00:21:36.035 "traddr": "10.0.0.2", 00:21:36.035 "adrfam": "ipv4", 00:21:36.035 "trsvcid": "4420", 00:21:36.035 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:36.035 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:36.035 "hdgst": false, 00:21:36.035 "ddgst": false 00:21:36.035 }, 00:21:36.035 "method": "bdev_nvme_attach_controller" 00:21:36.035 }' 00:21:36.035 [2024-11-20 12:31:18.960473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.035 [2024-11-20 12:31:19.001900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.410 Running I/O for 10 seconds... 00:21:37.669 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.669 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:37.669 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:37.669 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.669 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.669 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.669 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:37.669 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:37.669 12:31:20 
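The `gen_nvmf_target_json` activity traced above builds one `bdev_nvme_attach_controller` fragment per subsystem from a heredoc template, then joins them with `IFS=,` before handing the result to `bdevperf` through process substitution (`--json /dev/fd/63`). Below is a minimal sketch of that pattern for three subsystems; the variable values mirror the log, and the final wrapping of the joined fragments into a complete config document (the `jq .` step) is elided.

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem JSON assembly seen in nvmf/common.sh@582.
set -euo pipefail

TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
  # One heredoc fragment per subsystem; $-expansion fills in the values.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

# Join the fragments with commas -- the IFS=, trick from the log.
joined=$(IFS=,; printf '%s' "${config[*]}")
echo "$joined"
# bdevperf is then started roughly as (paths from the log):
#   bdevperf -r /var/tmp/bdevperf.sock --json <(echo "$joined") \
#            -q 64 -o 65536 -w verify -t 10
```

Because `"${config[*]}"` joins array elements with the first character of `IFS`, setting `IFS=,` inside the command substitution produces the `},{` sequence visible in the final `printf '%s\n'` output above without leaking the changed `IFS` into the rest of the script.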
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:37.669 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:37.669 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:37.669 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:37.669 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:37.928 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:37.928 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:37.928 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.928 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.928 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.928 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:37.928 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:37.928 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:38.187 12:31:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=135 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 135 -ge 100 ']' 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 500115 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 500115 ']' 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 500115 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.187 12:31:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 500115 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 500115' 00:21:38.187 killing process with pid 500115 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 500115 00:21:38.187 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 500115 00:21:38.187 Received shutdown signal, test time was about 0.733467 seconds 00:21:38.187 00:21:38.187 Latency(us) 00:21:38.187 [2024-11-20T11:31:21.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.188 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.188 Verification LBA range: start 0x0 length 0x400 00:21:38.188 Nvme1n1 : 0.70 278.53 17.41 0.00 0.00 224576.81 3903.67 219745.06 00:21:38.188 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.188 Verification LBA range: start 0x0 length 0x400 00:21:38.188 Nvme2n1 : 0.72 273.15 17.07 0.00 0.00 225078.07 2080.06 201508.95 00:21:38.188 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.188 Verification LBA range: start 0x0 length 0x400 00:21:38.188 Nvme3n1 : 0.71 269.85 16.87 0.00 0.00 222931.63 17438.27 214274.23 00:21:38.188 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.188 Verification LBA range: start 0x0 length 0x400 00:21:38.188 Nvme4n1 : 0.71 300.84 18.80 0.00 0.00 191283.72 6411.13 
206979.78 00:21:38.188 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.188 Verification LBA range: start 0x0 length 0x400 00:21:38.188 Nvme5n1 : 0.72 266.63 16.66 0.00 0.00 215201.76 17894.18 235245.75 00:21:38.188 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.188 Verification LBA range: start 0x0 length 0x400 00:21:38.188 Nvme6n1 : 0.73 264.10 16.51 0.00 0.00 212278.17 29063.79 204244.37 00:21:38.188 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.188 Verification LBA range: start 0x0 length 0x400 00:21:38.188 Nvme7n1 : 0.70 273.77 17.11 0.00 0.00 198051.39 15614.66 216097.84 00:21:38.188 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.188 Verification LBA range: start 0x0 length 0x400 00:21:38.188 Nvme8n1 : 0.72 265.23 16.58 0.00 0.00 200636.03 14417.92 220656.86 00:21:38.188 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.188 Verification LBA range: start 0x0 length 0x400 00:21:38.188 Nvme9n1 : 0.73 263.05 16.44 0.00 0.00 197423.64 19033.93 225215.89 00:21:38.188 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.188 Verification LBA range: start 0x0 length 0x400 00:21:38.188 Nvme10n1 : 0.73 259.28 16.21 0.00 0.00 194859.67 17780.20 242540.19 00:21:38.188 [2024-11-20T11:31:21.304Z] =================================================================================================================== 00:21:38.188 [2024-11-20T11:31:21.304Z] Total : 2714.42 169.65 0.00 0.00 208131.21 2080.06 242540.19 00:21:38.447 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:39.380 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 499910 00:21:39.380 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:21:39.380 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:39.380 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:39.380 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:39.380 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:39.380 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:39.380 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:39.380 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:39.380 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:39.380 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:39.380 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:39.380 rmmod nvme_tcp 00:21:39.380 rmmod nvme_fabrics 00:21:39.380 rmmod nvme_keyring 00:21:39.380 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:39.638 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:39.638 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:39.638 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 499910 ']' 
00:21:39.638 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 499910 00:21:39.638 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 499910 ']' 00:21:39.638 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 499910 00:21:39.638 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:39.638 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:39.638 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 499910 00:21:39.638 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:39.638 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:39.638 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 499910' 00:21:39.638 killing process with pid 499910 00:21:39.638 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 499910 00:21:39.638 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 499910 00:21:39.898 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:39.898 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:39.898 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:39.898 12:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:39.898 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:39.898 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:39.898 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:39.898 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:39.898 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:39.898 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.898 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.898 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.437 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:42.437 00:21:42.437 real 0m7.324s 00:21:42.437 user 0m21.498s 00:21:42.437 sys 0m1.273s 00:21:42.437 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:42.438 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:42.438 ************************************ 00:21:42.438 END TEST nvmf_shutdown_tc2 00:21:42.438 ************************************ 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:42.438 12:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:42.438 ************************************ 00:21:42.438 START TEST nvmf_shutdown_tc3 00:21:42.438 ************************************ 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.438 12:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:42.438 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:42.438 12:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:42.438 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:42.438 Found net devices under 0000:86:00.0: cvl_0_0 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.438 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:42.439 Found net devices under 0000:86:00.1: cvl_0_1 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.439 
12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.439 12:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:21:42.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:21:42.439 00:21:42.439 --- 10.0.0.2 ping statistics --- 00:21:42.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.439 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:42.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:21:42.439 00:21:42.439 --- 10.0.0.1 ping statistics --- 00:21:42.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.439 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
modprobe nvme-tcp 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=501236 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 501236 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 501236 ']' 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.439 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.439 [2024-11-20 12:31:25.446430] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:21:42.439 [2024-11-20 12:31:25.446476] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.439 [2024-11-20 12:31:25.511664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:42.699 [2024-11-20 12:31:25.554878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.699 [2024-11-20 12:31:25.554911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.699 [2024-11-20 12:31:25.554919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.699 [2024-11-20 12:31:25.554936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.699 [2024-11-20 12:31:25.554942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:42.699 [2024-11-20 12:31:25.556534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.699 [2024-11-20 12:31:25.556642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:42.699 [2024-11-20 12:31:25.556746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.699 [2024-11-20 12:31:25.556747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.699 [2024-11-20 12:31:25.705964] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.699 12:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.699 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:42.700 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.700 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:42.700 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.700 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:42.700 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.700 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:42.700 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:42.700 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.700 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.700 Malloc1 00:21:42.959 [2024-11-20 12:31:25.817216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.959 Malloc2 00:21:42.959 Malloc3 00:21:42.959 Malloc4 00:21:42.959 Malloc5 00:21:42.959 Malloc6 00:21:42.959 Malloc7 00:21:43.221 Malloc8 00:21:43.221 Malloc9 
00:21:43.221 Malloc10 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=501507 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 501507 /var/tmp/bdevperf.sock 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 501507 ']' 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:43.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.221 { 00:21:43.221 "params": { 00:21:43.221 "name": "Nvme$subsystem", 00:21:43.221 "trtype": "$TEST_TRANSPORT", 00:21:43.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.221 "adrfam": "ipv4", 00:21:43.221 "trsvcid": "$NVMF_PORT", 00:21:43.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.221 "hdgst": ${hdgst:-false}, 00:21:43.221 "ddgst": ${ddgst:-false} 00:21:43.221 }, 00:21:43.221 "method": "bdev_nvme_attach_controller" 00:21:43.221 } 00:21:43.221 EOF 00:21:43.221 )") 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.221 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.221 { 00:21:43.221 "params": { 00:21:43.221 "name": "Nvme$subsystem", 00:21:43.221 "trtype": "$TEST_TRANSPORT", 00:21:43.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.221 
"adrfam": "ipv4", 00:21:43.221 "trsvcid": "$NVMF_PORT", 00:21:43.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.222 "hdgst": ${hdgst:-false}, 00:21:43.222 "ddgst": ${ddgst:-false} 00:21:43.222 }, 00:21:43.222 "method": "bdev_nvme_attach_controller" 00:21:43.222 } 00:21:43.222 EOF 00:21:43.222 )") 00:21:43.222 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:43.222 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.222 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.222 { 00:21:43.222 "params": { 00:21:43.222 "name": "Nvme$subsystem", 00:21:43.222 "trtype": "$TEST_TRANSPORT", 00:21:43.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.222 "adrfam": "ipv4", 00:21:43.222 "trsvcid": "$NVMF_PORT", 00:21:43.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.222 "hdgst": ${hdgst:-false}, 00:21:43.222 "ddgst": ${ddgst:-false} 00:21:43.222 }, 00:21:43.222 "method": "bdev_nvme_attach_controller" 00:21:43.222 } 00:21:43.222 EOF 00:21:43.222 )") 00:21:43.222 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:43.222 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.222 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.222 { 00:21:43.222 "params": { 00:21:43.222 "name": "Nvme$subsystem", 00:21:43.222 "trtype": "$TEST_TRANSPORT", 00:21:43.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.222 "adrfam": "ipv4", 00:21:43.222 "trsvcid": "$NVMF_PORT", 00:21:43.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:43.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.222 "hdgst": ${hdgst:-false}, 00:21:43.222 "ddgst": ${ddgst:-false} 00:21:43.222 }, 00:21:43.222 "method": "bdev_nvme_attach_controller" 00:21:43.222 } 00:21:43.222 EOF 00:21:43.222 )") 00:21:43.222 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:43.222 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.222 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.222 { 00:21:43.222 "params": { 00:21:43.222 "name": "Nvme$subsystem", 00:21:43.223 "trtype": "$TEST_TRANSPORT", 00:21:43.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.223 "adrfam": "ipv4", 00:21:43.223 "trsvcid": "$NVMF_PORT", 00:21:43.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.223 "hdgst": ${hdgst:-false}, 00:21:43.223 "ddgst": ${ddgst:-false} 00:21:43.223 }, 00:21:43.223 "method": "bdev_nvme_attach_controller" 00:21:43.223 } 00:21:43.223 EOF 00:21:43.223 )") 00:21:43.223 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:43.223 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.223 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.223 { 00:21:43.223 "params": { 00:21:43.223 "name": "Nvme$subsystem", 00:21:43.223 "trtype": "$TEST_TRANSPORT", 00:21:43.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.223 "adrfam": "ipv4", 00:21:43.223 "trsvcid": "$NVMF_PORT", 00:21:43.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.223 "hdgst": ${hdgst:-false}, 00:21:43.223 "ddgst": 
${ddgst:-false} 00:21:43.223 }, 00:21:43.223 "method": "bdev_nvme_attach_controller" 00:21:43.223 } 00:21:43.223 EOF 00:21:43.223 )") 00:21:43.223 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:43.223 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.223 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.223 { 00:21:43.223 "params": { 00:21:43.223 "name": "Nvme$subsystem", 00:21:43.223 "trtype": "$TEST_TRANSPORT", 00:21:43.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.223 "adrfam": "ipv4", 00:21:43.223 "trsvcid": "$NVMF_PORT", 00:21:43.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.223 "hdgst": ${hdgst:-false}, 00:21:43.223 "ddgst": ${ddgst:-false} 00:21:43.223 }, 00:21:43.223 "method": "bdev_nvme_attach_controller" 00:21:43.223 } 00:21:43.223 EOF 00:21:43.223 )") 00:21:43.223 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:43.223 [2024-11-20 12:31:26.295089] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:21:43.223 [2024-11-20 12:31:26.295138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid501507 ] 00:21:43.223 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.223 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.223 { 00:21:43.223 "params": { 00:21:43.223 "name": "Nvme$subsystem", 00:21:43.223 "trtype": "$TEST_TRANSPORT", 00:21:43.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.223 "adrfam": "ipv4", 00:21:43.223 "trsvcid": "$NVMF_PORT", 00:21:43.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.223 "hdgst": ${hdgst:-false}, 00:21:43.223 "ddgst": ${ddgst:-false} 00:21:43.223 }, 00:21:43.223 "method": "bdev_nvme_attach_controller" 00:21:43.223 } 00:21:43.223 EOF 00:21:43.223 )") 00:21:43.223 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:43.223 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.223 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.223 { 00:21:43.223 "params": { 00:21:43.223 "name": "Nvme$subsystem", 00:21:43.223 "trtype": "$TEST_TRANSPORT", 00:21:43.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.223 "adrfam": "ipv4", 00:21:43.223 "trsvcid": "$NVMF_PORT", 00:21:43.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.223 "hdgst": ${hdgst:-false}, 00:21:43.223 "ddgst": ${ddgst:-false} 00:21:43.223 }, 00:21:43.223 "method": 
"bdev_nvme_attach_controller" 00:21:43.223 } 00:21:43.223 EOF 00:21:43.223 )") 00:21:43.223 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:43.223 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:43.223 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:43.223 { 00:21:43.223 "params": { 00:21:43.223 "name": "Nvme$subsystem", 00:21:43.223 "trtype": "$TEST_TRANSPORT", 00:21:43.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.223 "adrfam": "ipv4", 00:21:43.223 "trsvcid": "$NVMF_PORT", 00:21:43.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.223 "hdgst": ${hdgst:-false}, 00:21:43.223 "ddgst": ${ddgst:-false} 00:21:43.223 }, 00:21:43.223 "method": "bdev_nvme_attach_controller" 00:21:43.223 } 00:21:43.223 EOF 00:21:43.223 )") 00:21:43.223 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:43.223 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:21:43.223 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:43.223 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:43.223 "params": { 00:21:43.223 "name": "Nvme1", 00:21:43.223 "trtype": "tcp", 00:21:43.223 "traddr": "10.0.0.2", 00:21:43.223 "adrfam": "ipv4", 00:21:43.223 "trsvcid": "4420", 00:21:43.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.223 "hdgst": false, 00:21:43.223 "ddgst": false 00:21:43.223 }, 00:21:43.223 "method": "bdev_nvme_attach_controller" 00:21:43.223 },{ 00:21:43.223 "params": { 00:21:43.223 "name": "Nvme2", 00:21:43.223 "trtype": "tcp", 00:21:43.223 "traddr": "10.0.0.2", 00:21:43.223 "adrfam": "ipv4", 00:21:43.223 "trsvcid": "4420", 00:21:43.223 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:43.223 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:43.223 "hdgst": false, 00:21:43.223 "ddgst": false 00:21:43.223 }, 00:21:43.223 "method": "bdev_nvme_attach_controller" 00:21:43.223 },{ 00:21:43.223 "params": { 00:21:43.223 "name": "Nvme3", 00:21:43.223 "trtype": "tcp", 00:21:43.223 "traddr": "10.0.0.2", 00:21:43.223 "adrfam": "ipv4", 00:21:43.223 "trsvcid": "4420", 00:21:43.223 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:43.223 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:43.223 "hdgst": false, 00:21:43.223 "ddgst": false 00:21:43.223 }, 00:21:43.223 "method": "bdev_nvme_attach_controller" 00:21:43.223 },{ 00:21:43.223 "params": { 00:21:43.223 "name": "Nvme4", 00:21:43.223 "trtype": "tcp", 00:21:43.223 "traddr": "10.0.0.2", 00:21:43.223 "adrfam": "ipv4", 00:21:43.223 "trsvcid": "4420", 00:21:43.223 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:43.223 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:43.223 "hdgst": false, 00:21:43.223 "ddgst": false 00:21:43.223 }, 00:21:43.223 "method": "bdev_nvme_attach_controller" 00:21:43.223 },{ 00:21:43.223 "params": { 
00:21:43.223 "name": "Nvme5", 00:21:43.223 "trtype": "tcp", 00:21:43.223 "traddr": "10.0.0.2", 00:21:43.223 "adrfam": "ipv4", 00:21:43.223 "trsvcid": "4420", 00:21:43.223 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:43.223 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:43.223 "hdgst": false, 00:21:43.223 "ddgst": false 00:21:43.223 }, 00:21:43.223 "method": "bdev_nvme_attach_controller" 00:21:43.223 },{ 00:21:43.223 "params": { 00:21:43.223 "name": "Nvme6", 00:21:43.223 "trtype": "tcp", 00:21:43.223 "traddr": "10.0.0.2", 00:21:43.223 "adrfam": "ipv4", 00:21:43.223 "trsvcid": "4420", 00:21:43.223 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:43.223 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:43.223 "hdgst": false, 00:21:43.223 "ddgst": false 00:21:43.223 }, 00:21:43.224 "method": "bdev_nvme_attach_controller" 00:21:43.224 },{ 00:21:43.224 "params": { 00:21:43.224 "name": "Nvme7", 00:21:43.224 "trtype": "tcp", 00:21:43.224 "traddr": "10.0.0.2", 00:21:43.224 "adrfam": "ipv4", 00:21:43.224 "trsvcid": "4420", 00:21:43.224 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:43.224 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:43.224 "hdgst": false, 00:21:43.224 "ddgst": false 00:21:43.224 }, 00:21:43.224 "method": "bdev_nvme_attach_controller" 00:21:43.224 },{ 00:21:43.224 "params": { 00:21:43.224 "name": "Nvme8", 00:21:43.224 "trtype": "tcp", 00:21:43.224 "traddr": "10.0.0.2", 00:21:43.224 "adrfam": "ipv4", 00:21:43.224 "trsvcid": "4420", 00:21:43.224 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:43.224 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:43.224 "hdgst": false, 00:21:43.224 "ddgst": false 00:21:43.224 }, 00:21:43.224 "method": "bdev_nvme_attach_controller" 00:21:43.224 },{ 00:21:43.224 "params": { 00:21:43.224 "name": "Nvme9", 00:21:43.224 "trtype": "tcp", 00:21:43.224 "traddr": "10.0.0.2", 00:21:43.224 "adrfam": "ipv4", 00:21:43.224 "trsvcid": "4420", 00:21:43.224 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:43.224 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:43.224 "hdgst": false, 00:21:43.224 "ddgst": false 00:21:43.224 }, 00:21:43.224 "method": "bdev_nvme_attach_controller" 00:21:43.224 },{ 00:21:43.224 "params": { 00:21:43.224 "name": "Nvme10", 00:21:43.224 "trtype": "tcp", 00:21:43.224 "traddr": "10.0.0.2", 00:21:43.224 "adrfam": "ipv4", 00:21:43.224 "trsvcid": "4420", 00:21:43.224 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:43.224 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:43.224 "hdgst": false, 00:21:43.224 "ddgst": false 00:21:43.224 }, 00:21:43.224 "method": "bdev_nvme_attach_controller" 00:21:43.224 }' 00:21:43.484 [2024-11-20 12:31:26.371946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.484 [2024-11-20 12:31:26.413454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.388 Running I/O for 10 seconds... 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.388 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.644 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.644 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:45.644 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:45.644 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x
00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131
00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 501236
00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 501236 ']'
00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 501236
00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 501236
00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 501236'
00:21:45.918 killing process with pid 501236
00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 501236
00:21:45.918 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 501236
00:21:45.918 [2024-11-20 12:31:28.885218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2463700 is same with the state(6) to be set
00:21:45.919 [... identical recv-state messages for tqpair=0x2463700 elided (12:31:28.885274 through 12:31:28.885673) ...]
00:21:45.919 [2024-11-20 12:31:28.886734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2466180 is same with the state(6) to be set
00:21:45.919 [... identical recv-state messages for tqpair=0x2466180 elided (12:31:28.886772 through 12:31:28.887175) ...]
00:21:45.919 [2024-11-20 12:31:28.888351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2463bf0 is same with the state(6) to be set
00:21:45.920 [... identical recv-state messages for tqpair=0x2463bf0 elided (12:31:28.888364 through 12:31:28.888768) ...]
00:21:45.920 [2024-11-20 12:31:28.890550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.920 [2024-11-20 12:31:28.890588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.920 [2024-11-20 12:31:28.890597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.920 [2024-11-20 12:31:28.890605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.920 [2024-11-20 12:31:28.890612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.920 [2024-11-20 12:31:28.890620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.920 [2024-11-20 12:31:28.890627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.920 [2024-11-20 12:31:28.890634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.920 [2024-11-20 12:31:28.890641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1281e70 is same with the state(6) to be set
00:21:45.920 [2024-11-20 12:31:28.890709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.920 [2024-11-20 12:31:28.890719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.920 [2024-11-20 12:31:28.890727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.920 [2024-11-20 12:31:28.890733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.920 [2024-11-20 12:31:28.890744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.920 [2024-11-20 12:31:28.890751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.920 [2024-11-20 12:31:28.890759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.920 [2024-11-20 12:31:28.890765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.920 [2024-11-20 12:31:28.890772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3c1b0 is same with the state(6) to be set
00:21:45.920 [2024-11-20 12:31:28.890795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.920 [2024-11-20 12:31:28.890803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.920 [2024-11-20 12:31:28.890804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set
00:21:45.920 [... identical recv-state messages for tqpair=0x24640c0 elided (12:31:28.890830 through 12:31:28.890927), interleaved with the nvme_qpair notices below ...]
00:21:45.920 [2024-11-20 12:31:28.890811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.920 [2024-11-20 12:31:28.890818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.920 [2024-11-20 12:31:28.890826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.920 [2024-11-20 12:31:28.890833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.920 [2024-11-20 12:31:28.890841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.920 [2024-11-20 12:31:28.890849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.920 [2024-11-20 12:31:28.890856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe39c70 is same with the state(6) to be set
00:21:45.920 [2024-11-20 12:31:28.890880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.920 [2024-11-20 12:31:28.890889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.920 [2024-11-20 12:31:28.890898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.920 [2024-11-20 12:31:28.890907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.920 [2024-11-20 12:31:28.890916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.920 [2024-11-20 12:31:28.890923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.920 [2024-11-20 12:31:28.890932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.920 [2024-11-20 12:31:28.890935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.890939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.920 [2024-11-20 12:31:28.890942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.890954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3bd50 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.890956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.890963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.890969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.890976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.890983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.890989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.890995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 
[2024-11-20 12:31:28.891007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891090] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891166] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891243] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.891256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24640c0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892752] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892835] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892909] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.920 [2024-11-20 12:31:28.892915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.892922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.892927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.892934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.892940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.892953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.892959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.892966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.892974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.892981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.892987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.892993] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.893000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.893006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.893015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.893021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.893028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.893034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.893040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.893047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.893053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.893059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.893065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.893071] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.893077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.893084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.893090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.893096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.893102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24645b0 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894208] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894290] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894369] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set 00:21:45.921 [2024-11-20 12:31:28.894447] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464930 is same with the state(6) to be set
00:21:45.921 [... message repeated for tqpair=0x2464930 through 12:31:28.894571]
00:21:45.921 [2024-11-20 12:31:28.895214] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:45.921 [... message repeated twice more, through 12:31:28.895342]
00:21:45.921 [2024-11-20 12:31:28.900483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24652d0 is same with the state(6) to be set
00:21:45.922 [... message repeated for tqpair=0x24652d0 through 12:31:28.900876]
00:21:45.922 [2024-11-20 12:31:28.901623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24657c0 is same with the state(6) to be set
00:21:45.922 [... message repeated for tqpair=0x24657c0 through 12:31:28.901971]
00:21:45.922 [2024-11-20 12:31:28.901976] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24657c0 is same with the state(6) to be set
00:21:45.922 [2024-11-20 12:31:28.913032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.922 [2024-11-20 12:31:28.913062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.922 [... ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1-3]
00:21:45.922 [2024-11-20 12:31:28.913113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1281320 is same with the state(6) to be set
00:21:45.922 [2024-11-20 12:31:28.913138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1281e70 (9): Bad file descriptor
00:21:45.922 [... same four ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs repeated for tqpair=0x127f930, 0xd50610, 0x125d590, 0x127a3e0 and 0x12667a0, each ending in a recv-state *ERROR* for that tqpair, through 12:31:28.913585]
00:21:45.922 [2024-11-20 12:31:28.913600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3c1b0 (9): Bad file descriptor
00:21:45.922 [2024-11-20 12:31:28.913615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe39c70 (9): Bad file descriptor 
00:21:45.922 [2024-11-20 12:31:28.913629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3bd50 (9): Bad file descriptor
00:21:45.922 [2024-11-20 12:31:28.913715] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:45.922 [2024-11-20 12:31:28.916055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.922 [2024-11-20 12:31:28.916081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.923 [... WRITE / ABORTED - SQ DELETION pair repeated for cid:27-41, lba:28032-29824 (len:128 each), through 12:31:28.916330]
00:21:45.923 [2024-11-20 12:31:28.916340] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916426] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 
[2024-11-20 12:31:28.916608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:45.923 [2024-11-20 12:31:28.916873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916968] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.916985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.916993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 
12:31:28.917484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.923 [2024-11-20 12:31:28.917564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.923 [2024-11-20 12:31:28.917573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 
[2024-11-20 12:31:28.917840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.917988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.917995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 
12:31:28.918200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.918352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.918360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c5a20 is same with the state(6) to be set 00:21:45.924 [2024-11-20 12:31:28.918535] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:21:45.924 [2024-11-20 12:31:28.919529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.919544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.919555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.919563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.919571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.919578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.919590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.919598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.919608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.919615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.919623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.919634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.919643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.919650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.919659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.919665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.919675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.919683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.919692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.919698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.919707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.919713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.919723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.919730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.919742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.919748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.919757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.919764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.919773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.919780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.919788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.919795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.919804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.919811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:45.924 [2024-11-20 12:31:28.919819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.919825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.924 [2024-11-20 12:31:28.919833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.924 [2024-11-20 12:31:28.919841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.919851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.919858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.919867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.919874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.919883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.919890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.919898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.919905] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.919913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.919920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.919929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.919937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.919946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.919957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.919966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.919974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.919982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.919988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.919997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 
12:31:28.920177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920263] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 
[2024-11-20 12:31:28.920445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.925 [2024-11-20 12:31:28.920523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.925 [2024-11-20 12:31:28.920533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.925 [2024-11-20 12:31:28.920540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.925 [2024-11-20 12:31:28.920548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.925 [2024-11-20 12:31:28.920554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.925 [2024-11-20 12:31:28.921599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:21:45.925 [2024-11-20 12:31:28.921630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a3e0 (9): Bad file descriptor
00:21:45.925 [2024-11-20 12:31:28.921673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.925 [2024-11-20 12:31:28.921684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.925 [2024-11-20 12:31:28.921695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.925 [2024-11-20 12:31:28.921704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.925 [2024-11-20 12:31:28.921717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.925 [2024-11-20 12:31:28.921724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.925 [2024-11-20 12:31:28.921733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.925 [2024-11-20 12:31:28.921740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.925 [2024-11-20 12:31:28.921748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.925 [2024-11-20 12:31:28.921755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.925 [2024-11-20 12:31:28.921767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.925 [2024-11-20 12:31:28.921775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.921784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.921792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.921801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.921815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.921824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.921831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.921840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.921847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.921860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.921867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.921875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.921883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.921892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.921900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.921908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.921915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.921924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.921931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.921939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.921946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.921960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.921967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.921976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.921983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.921993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.922000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.922009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.922015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.926 [2024-11-20 12:31:28.929600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.926 [2024-11-20 12:31:28.929607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.929616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.929622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.929632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.929639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.929647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.929654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.929663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.929670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.929679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.929686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.929694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.929702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.929710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.929717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.929727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.929734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.929743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.929750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.929758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.929765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.929774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.929780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.929788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123ea10 is same with the state(6) to be set
00:21:45.927 [2024-11-20 12:31:28.930965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:45.927 [2024-11-20 12:31:28.930992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:21:45.927 [2024-11-20 12:31:28.931006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127f930 (9): Bad file descriptor
00:21:45.927 [2024-11-20 12:31:28.931017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd50610 (9): Bad file descriptor
00:21:45.927 [2024-11-20 12:31:28.931051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1281320 (9): Bad file descriptor
00:21:45.927 [2024-11-20 12:31:28.931078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125d590 (9): Bad file descriptor
00:21:45.927 [2024-11-20 12:31:28.931097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12667a0 (9): Bad file descriptor
00:21:45.927 [2024-11-20 12:31:28.932096] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:45.927 [2024-11-20 12:31:28.932649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:21:45.927 [2024-11-20 12:31:28.932841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:45.927 [2024-11-20 12:31:28.932861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a3e0 with addr=10.0.0.2, port=4420
00:21:45.927 [2024-11-20 12:31:28.932873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a3e0 is same with the state(6) to be set
00:21:45.927 [2024-11-20 12:31:28.932955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.932969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.932984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.932995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.927 [2024-11-20 12:31:28.933705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.927 [2024-11-20 12:31:28.933714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.928 [2024-11-20 12:31:28.933725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.928 [2024-11-20 12:31:28.933735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.928 [2024-11-20 12:31:28.933746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.928 [2024-11-20 12:31:28.933755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.928 [2024-11-20 12:31:28.933767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.928 [2024-11-20 12:31:28.933777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.928 [2024-11-20 12:31:28.933789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.928 [2024-11-20 12:31:28.933798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.928 [2024-11-20 12:31:28.933809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.928 [2024-11-20 12:31:28.933818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.928 [2024-11-20 12:31:28.933830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.928 [2024-11-20 12:31:28.933841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.928 [2024-11-20 12:31:28.933853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.928 [2024-11-20 12:31:28.933862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.928 [2024-11-20 12:31:28.933873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.928 [2024-11-20 12:31:28.933883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.928 [2024-11-20 12:31:28.933897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.928 [2024-11-20 12:31:28.933907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.928 [2024-11-20 12:31:28.933918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.928 [2024-11-20 12:31:28.933927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.928 [2024-11-20 12:31:28.933939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.928 [2024-11-20 12:31:28.933954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.928 [2024-11-20 12:31:28.933968] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.933978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.933989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.933999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.934010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.934020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.934032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.934042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.934053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.934064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.934076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.934086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.934097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.934106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.934120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.934130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.934142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.934152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.934164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.934173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.934185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.934195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.934207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 
[2024-11-20 12:31:28.934217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.934228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.934239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.934252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.934261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.934273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.934283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.934295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.934305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.934317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.934326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.934338] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.934348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.934358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10404d0 is same with the state(6) to be set 00:21:45.928 [2024-11-20 12:31:28.935725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.935742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.935760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.935771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.935783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.935793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.935805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.935815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.935826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.935836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.935848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.935858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.935870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.935880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.935891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.935902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.935914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.935924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.935936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.935946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:45.928 [2024-11-20 12:31:28.935964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.935974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.935986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.935996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.936009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.936018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.936030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.936043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.936055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.936065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.936077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.936088] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.936100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.936110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.936122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.936132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.936143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.936153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.936165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.936175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.936186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.936196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.936207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.928 [2024-11-20 12:31:28.936217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.928 [2024-11-20 12:31:28.936230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 
12:31:28.936461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936581] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 
[2024-11-20 12:31:28.936831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.936978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.936988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.937000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.937010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.937021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.937031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.937043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.937052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.937064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.937074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.937086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.937095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.937107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.937116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.937129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.937139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.937151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10416a0 is same with the state(6) to be set 00:21:45.929 [2024-11-20 12:31:28.938509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.938526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.938542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.938551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:45.929 [2024-11-20 12:31:28.938564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.938573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.938586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.938595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.929 [2024-11-20 12:31:28.938607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.929 [2024-11-20 12:31:28.938617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.938628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.938637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.938649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.938659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.938670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.938680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.938692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.938701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.938713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.938723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.938735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.938745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.938757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.938767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.938782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.938792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.938804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.938813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.938826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.938836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.938848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.938859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.938872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.938882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.938894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.938904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.938916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.938925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:45.930 [2024-11-20 12:31:28.938937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.938951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.938964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.938973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.938985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.938995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939063] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 
12:31:28.939440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939562] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.930 [2024-11-20 12:31:28.939744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.930 [2024-11-20 12:31:28.939756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.939765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.939777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.939788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.939800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 
[2024-11-20 12:31:28.939810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.939821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.939831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.939842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.939854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.939866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.939877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.939890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.939900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.939912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.939922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.939933] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231750 is same with the state(6) to be set 00:21:45.931 [2024-11-20 12:31:28.941641] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:45.931 [2024-11-20 12:31:28.941727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.941741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.941756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.941765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.941777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.941787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.941799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.941808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.941820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.941828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:45.931 [2024-11-20 12:31:28.941840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.941849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.941860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.941869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.941881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.941889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.941900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.941910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.941922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.941935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.941955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.941965] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.941976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.941985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.941997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 
12:31:28.942313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942424] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.931 [2024-11-20 12:31:28.942639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.931 [2024-11-20 12:31:28.942647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 
[2024-11-20 12:31:28.942663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.942679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.942696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.942714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.942730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.942747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.942766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.942784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.942801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.942817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.942834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.942851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.942870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.942886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.942902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.942918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.942938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.942958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.942975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.942985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.942993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.943001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c6ef0 is same with the state(6) to be set 00:21:45.932 [2024-11-20 12:31:28.943977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:45.932 [2024-11-20 12:31:28.943994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:45.932 [2024-11-20 12:31:28.944003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:45.932 [2024-11-20 12:31:28.944015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:45.932 [2024-11-20 12:31:28.944310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.932 [2024-11-20 12:31:28.944326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd50610 with addr=10.0.0.2, port=4420 00:21:45.932 [2024-11-20 12:31:28.944335] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd50610 is same with the state(6) to be set 00:21:45.932 [2024-11-20 12:31:28.944511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.932 [2024-11-20 12:31:28.944522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127f930 with addr=10.0.0.2, port=4420 00:21:45.932 [2024-11-20 12:31:28.944530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127f930 is same with the state(6) to be set 00:21:45.932 [2024-11-20 12:31:28.944726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.932 [2024-11-20 12:31:28.944739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12667a0 with addr=10.0.0.2, port=4420 00:21:45.932 [2024-11-20 12:31:28.944747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12667a0 is same with the state(6) to be set 00:21:45.932 [2024-11-20 12:31:28.944758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a3e0 (9): Bad file descriptor 00:21:45.932 [2024-11-20 12:31:28.944803] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:21:45.932 [2024-11-20 12:31:28.944816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12667a0 (9): Bad file descriptor 00:21:45.932 [2024-11-20 12:31:28.944829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127f930 (9): Bad file descriptor 00:21:45.932 [2024-11-20 12:31:28.944842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd50610 (9): Bad file descriptor 00:21:45.932 [2024-11-20 12:31:28.945326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.932 [2024-11-20 12:31:28.945345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3c1b0 with addr=10.0.0.2, port=4420 00:21:45.932 [2024-11-20 12:31:28.945354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3c1b0 is same with the state(6) to be set 00:21:45.932 [2024-11-20 12:31:28.945445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.932 [2024-11-20 12:31:28.945456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3bd50 with addr=10.0.0.2, port=4420 00:21:45.932 [2024-11-20 12:31:28.945464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3bd50 is same with the state(6) to be set 00:21:45.932 [2024-11-20 12:31:28.945630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.932 [2024-11-20 12:31:28.945642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe39c70 with addr=10.0.0.2, port=4420 00:21:45.932 [2024-11-20 12:31:28.945650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe39c70 is same with the state(6) to be set 00:21:45.932 [2024-11-20 12:31:28.945750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.932 [2024-11-20 12:31:28.945760] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1281e70 with addr=10.0.0.2, port=4420 00:21:45.932 [2024-11-20 12:31:28.945769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1281e70 is same with the state(6) to be set 00:21:45.932 [2024-11-20 12:31:28.945781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:45.932 [2024-11-20 12:31:28.945790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:45.932 [2024-11-20 12:31:28.945799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:45.932 [2024-11-20 12:31:28.945808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:45.932 [2024-11-20 12:31:28.946527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.946544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.946556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.946565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.946576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.946584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 
12:31:28.946594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.946601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.946611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.946617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.946627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.946635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.946650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.946659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.946669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.946677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.946687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.946694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.946704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.946712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.946723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.946731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.946740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.946747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.946757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.946764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.946773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.946781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.946792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 
nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.932 [2024-11-20 12:31:28.946799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.932 [2024-11-20 12:31:28.946808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.946815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.946825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.946832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.946842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.946850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.946861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.946870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.946880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.946887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:45.933 [2024-11-20 12:31:28.946897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.946904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.946915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.946922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.946932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.946939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.946952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.946960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.946968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.946976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.946986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.946995] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 
12:31:28.947304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947403] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 
[2024-11-20 12:31:28.947592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.933 [2024-11-20 12:31:28.947636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.933 [2024-11-20 12:31:28.947643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.947654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.947661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.947671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123fed0 is same with the state(6) to be set 00:21:45.934 [2024-11-20 12:31:28.948701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.948732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.948745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.948752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.948763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.948770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.948780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.948788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.948802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.948811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.948820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.948828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.948837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.948845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.948854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.948863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.948873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.948881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.948891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.948899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.948908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.948916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.948925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.948933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:45.934 [2024-11-20 12:31:28.948944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.948956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.948966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.948973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.948982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.948991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949049] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 
12:31:28.949354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949449] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.934 [2024-11-20 12:31:28.949616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.934 [2024-11-20 12:31:28.949625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.935 [2024-11-20 12:31:28.949632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.935 [2024-11-20 12:31:28.949641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.935 
[2024-11-20 12:31:28.949648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.935 [2024-11-20 12:31:28.949657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.935 [2024-11-20 12:31:28.949666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.935 [2024-11-20 12:31:28.949676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.935 [2024-11-20 12:31:28.949684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.935 [2024-11-20 12:31:28.949694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.935 [2024-11-20 12:31:28.949702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.935 [2024-11-20 12:31:28.949712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.935 [2024-11-20 12:31:28.949719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.935 [2024-11-20 12:31:28.949728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.935 [2024-11-20 12:31:28.949737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.935 [2024-11-20 12:31:28.949745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.935 [2024-11-20 12:31:28.949753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.935 [2024-11-20 12:31:28.949762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.935 [2024-11-20 12:31:28.949769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.935 [2024-11-20 12:31:28.949778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.935 [2024-11-20 12:31:28.949785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.935 [2024-11-20 12:31:28.949793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.935 [2024-11-20 12:31:28.949801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.935 [2024-11-20 12:31:28.949811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.935 [2024-11-20 12:31:28.949819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.935 [2024-11-20 12:31:28.949828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.935 [2024-11-20 12:31:28.949835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.935 [2024-11-20 12:31:28.949844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.935 [2024-11-20 12:31:28.949852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.935 [2024-11-20 12:31:28.949860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1243d80 is same with the state(6) to be set
00:21:45.935 [2024-11-20 12:31:28.951072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:21:45.935 task offset: 27904 on job bdev=Nvme6n1 fails
00:21:45.935
00:21:45.935 Latency(us)
00:21:45.935 [2024-11-20T11:31:29.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:45.935 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.935 Job: Nvme1n1 ended in about 0.92 seconds with error
00:21:45.935 Verification LBA range: start 0x0 length 0x400
00:21:45.935 Nvme1n1 : 0.92 209.73 13.11 69.91 0.00 226588.49 17324.30 221568.67
00:21:45.935 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.935 Job: Nvme2n1 ended in about 0.92 seconds with error
00:21:45.935 Verification LBA range: start 0x0 length 0x400
00:21:45.935 Nvme2n1 : 0.92 209.09 13.07 69.70 0.00 223297.67 16640.45 203332.56
00:21:45.935 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.935 Job: Nvme3n1 ended in about 0.92 seconds with error
00:21:45.935 Verification LBA range: start 0x0 length 0x400
00:21:45.935 Nvme3n1 : 0.92 208.46 13.03 69.49 0.00 220042.69 16070.57 224304.08
00:21:45.935 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.935 Job: Nvme4n1 ended in about 0.91 seconds with error
00:21:45.935 Verification LBA range: start 0x0 length 0x400
00:21:45.935 Nvme4n1 : 0.91 224.74 14.05 70.16 0.00 203536.15 15386.71 218833.25
00:21:45.935 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.935 Job: Nvme5n1 ended in about 0.93 seconds with error
00:21:45.935 Verification LBA range: start 0x0 length 0x400
00:21:45.935 Nvme5n1 : 0.93 212.13 13.26 68.92 0.00 209949.16 9346.00 218833.25
00:21:45.935 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.935 Job: Nvme6n1 ended in about 0.90 seconds with error
00:21:45.935 Verification LBA range: start 0x0 length 0x400
00:21:45.935 Nvme6n1 : 0.90 213.42 13.34 71.14 0.00 202823.01 15956.59 219745.06
00:21:45.935 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.935 Job: Nvme7n1 ended in about 0.91 seconds with error
00:21:45.935 Verification LBA range: start 0x0 length 0x400
00:21:45.935 Nvme7n1 : 0.91 210.76 13.17 70.25 0.00 201652.65 12936.24 221568.67
00:21:45.935 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.935 Job: Nvme8n1 ended in about 0.93 seconds with error
00:21:45.935 Verification LBA range: start 0x0 length 0x400
00:21:45.935 Nvme8n1 : 0.93 206.26 12.89 68.75 0.00 202762.46 16526.47 219745.06
00:21:45.935 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.935 Job: Nvme9n1 ended in about 0.90 seconds with error
00:21:45.935 Verification LBA range: start 0x0 length 0x400
00:21:45.935 Nvme9n1 : 0.90 212.93 13.31 70.98 0.00 191603.53 7978.30 224304.08
00:21:45.935 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.935 Job: Nvme10n1 ended in about 0.92 seconds with error
00:21:45.935 Verification LBA range: start 0x0 length 0x400
00:21:45.935 Nvme10n1 : 0.92 143.94 9.00 69.26 0.00 251257.01 18692.01 246187.41
00:21:45.935 [2024-11-20T11:31:29.051Z] ===================================================================================================================
00:21:45.935 [2024-11-20T11:31:29.051Z] Total : 2051.48 128.22 698.56 0.00 212406.05 7978.30 246187.41
00:21:45.935 [2024-11-20 12:31:28.980096] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:45.935 [2024-11-20 12:31:28.980145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:45.935 [2024-11-20 12:31:28.980199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3c1b0 (9): Bad file descriptor
00:21:45.935 [2024-11-20 12:31:28.980214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3bd50 (9): Bad file descriptor
00:21:45.935 [2024-11-20 12:31:28.980224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe39c70 (9): Bad file descriptor
00:21:45.935 [2024-11-20 12:31:28.980234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1281e70 (9): Bad file descriptor
00:21:45.935 [2024-11-20 12:31:28.980244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:21:45.935 [2024-11-20 12:31:28.980251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:21:45.935 [2024-11-20 12:31:28.980260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:21:45.935 [2024-11-20 12:31:28.980268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:21:45.935 [2024-11-20 12:31:28.980284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:45.935 [2024-11-20 12:31:28.980290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:45.935 [2024-11-20 12:31:28.980297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:45.935 [2024-11-20 12:31:28.980305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:21:45.935 [2024-11-20 12:31:28.980313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:45.935 [2024-11-20 12:31:28.980319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:45.935 [2024-11-20 12:31:28.980327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:45.935 [2024-11-20 12:31:28.980333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:21:45.935 [2024-11-20 12:31:28.980855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.935 [2024-11-20 12:31:28.980881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x125d590 with addr=10.0.0.2, port=4420 00:21:45.935 [2024-11-20 12:31:28.980894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125d590 is same with the state(6) to be set 00:21:45.935 [2024-11-20 12:31:28.980985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.935 [2024-11-20 12:31:28.980998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1281320 with addr=10.0.0.2, port=4420 00:21:45.935 [2024-11-20 12:31:28.981007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1281320 is same with the state(6) to be set 00:21:45.935 [2024-11-20 12:31:28.981017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:45.935 [2024-11-20 12:31:28.981023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:45.935 [2024-11-20 12:31:28.981032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:45.935 [2024-11-20 12:31:28.981041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:45.935 [2024-11-20 12:31:28.981049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:45.935 [2024-11-20 12:31:28.981056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:45.935 [2024-11-20 12:31:28.981063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:21:45.935 [2024-11-20 12:31:28.981070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:45.935 [2024-11-20 12:31:28.981079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:45.935 [2024-11-20 12:31:28.981086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:45.935 [2024-11-20 12:31:28.981094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:45.935 [2024-11-20 12:31:28.981101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:45.935 [2024-11-20 12:31:28.981109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:45.935 [2024-11-20 12:31:28.981116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:45.935 [2024-11-20 12:31:28.981122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:45.935 [2024-11-20 12:31:28.981133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:21:45.935 [2024-11-20 12:31:28.981741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125d590 (9): Bad file descriptor 00:21:45.935 [2024-11-20 12:31:28.981760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1281320 (9): Bad file descriptor 00:21:45.935 [2024-11-20 12:31:28.981803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:45.935 [2024-11-20 12:31:28.981816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:45.935 [2024-11-20 12:31:28.981827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:45.935 [2024-11-20 12:31:28.981836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:45.935 [2024-11-20 12:31:28.981844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:45.935 [2024-11-20 12:31:28.981855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:45.935 [2024-11-20 12:31:28.981893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:45.935 [2024-11-20 12:31:28.981902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:45.935 [2024-11-20 12:31:28.981909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:45.935 [2024-11-20 12:31:28.981917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:21:45.935 [2024-11-20 12:31:28.981925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:45.936 [2024-11-20 12:31:28.981932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:45.936 [2024-11-20 12:31:28.981940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:45.936 [2024-11-20 12:31:28.981952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:45.936 [2024-11-20 12:31:28.982208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:45.936 [2024-11-20 12:31:28.982221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:45.936 [2024-11-20 12:31:28.982413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.936 [2024-11-20 12:31:28.982430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127a3e0 with addr=10.0.0.2, port=4420 00:21:45.936 [2024-11-20 12:31:28.982440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a3e0 is same with the state(6) to be set 00:21:45.936 [2024-11-20 12:31:28.982601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.936 [2024-11-20 12:31:28.982613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12667a0 with addr=10.0.0.2, port=4420 00:21:45.936 [2024-11-20 12:31:28.982622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12667a0 is same with the state(6) to be set 00:21:45.936 [2024-11-20 12:31:28.982719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.936 [2024-11-20 12:31:28.982731] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127f930 with addr=10.0.0.2, port=4420 00:21:45.936 [2024-11-20 12:31:28.982741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127f930 is same with the state(6) to be set 00:21:45.936 [2024-11-20 12:31:28.982801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.936 [2024-11-20 12:31:28.982812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd50610 with addr=10.0.0.2, port=4420 00:21:45.936 [2024-11-20 12:31:28.982824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd50610 is same with the state(6) to be set 00:21:45.936 [2024-11-20 12:31:28.982958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.936 [2024-11-20 12:31:28.982980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1281e70 with addr=10.0.0.2, port=4420 00:21:45.936 [2024-11-20 12:31:28.982989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1281e70 is same with the state(6) to be set 00:21:45.936 [2024-11-20 12:31:28.983074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.936 [2024-11-20 12:31:28.983085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe39c70 with addr=10.0.0.2, port=4420 00:21:45.936 [2024-11-20 12:31:28.983095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe39c70 is same with the state(6) to be set 00:21:45.936 [2024-11-20 12:31:28.983266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.936 [2024-11-20 12:31:28.983280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3bd50 with addr=10.0.0.2, port=4420 00:21:45.936 [2024-11-20 12:31:28.983289] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3bd50 is same with the state(6) to be set 00:21:45.936 [2024-11-20 12:31:28.983451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.936 [2024-11-20 12:31:28.983464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3c1b0 with addr=10.0.0.2, port=4420 00:21:45.936 [2024-11-20 12:31:28.983473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3c1b0 is same with the state(6) to be set 00:21:45.936 [2024-11-20 12:31:28.983485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127a3e0 (9): Bad file descriptor 00:21:45.936 [2024-11-20 12:31:28.983495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12667a0 (9): Bad file descriptor 00:21:45.936 [2024-11-20 12:31:28.983506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127f930 (9): Bad file descriptor 00:21:45.936 [2024-11-20 12:31:28.983515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd50610 (9): Bad file descriptor 00:21:45.936 [2024-11-20 12:31:28.983525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1281e70 (9): Bad file descriptor 00:21:45.936 [2024-11-20 12:31:28.983534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe39c70 (9): Bad file descriptor 00:21:45.936 [2024-11-20 12:31:28.983562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3bd50 (9): Bad file descriptor 00:21:45.936 [2024-11-20 12:31:28.983574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3c1b0 (9): Bad file descriptor 00:21:45.936 [2024-11-20 12:31:28.983583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 
00:21:45.936 [2024-11-20 12:31:28.983590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:45.936 [2024-11-20 12:31:28.983600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:45.936 [2024-11-20 12:31:28.983608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:45.936 [2024-11-20 12:31:28.983616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:45.936 [2024-11-20 12:31:28.983623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:45.936 [2024-11-20 12:31:28.983631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:45.936 [2024-11-20 12:31:28.983638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:45.936 [2024-11-20 12:31:28.983649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:45.936 [2024-11-20 12:31:28.983655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:45.936 [2024-11-20 12:31:28.983662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:45.936 [2024-11-20 12:31:28.983670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:21:45.936 [2024-11-20 12:31:28.983677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:45.936 [2024-11-20 12:31:28.983684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:45.936 [2024-11-20 12:31:28.983691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:45.936 [2024-11-20 12:31:28.983698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:45.936 [2024-11-20 12:31:28.983706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:45.936 [2024-11-20 12:31:28.983713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:45.936 [2024-11-20 12:31:28.983721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:45.936 [2024-11-20 12:31:28.983728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:45.936 [2024-11-20 12:31:28.983735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:45.936 [2024-11-20 12:31:28.983741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:45.936 [2024-11-20 12:31:28.983750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:45.936 [2024-11-20 12:31:28.983757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:21:45.936 [2024-11-20 12:31:28.983781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:45.936 [2024-11-20 12:31:28.983791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:45.936 [2024-11-20 12:31:28.983799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:45.936 [2024-11-20 12:31:28.983805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:45.936 [2024-11-20 12:31:28.983815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:45.936 [2024-11-20 12:31:28.983822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:45.936 [2024-11-20 12:31:28.983828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:45.936 [2024-11-20 12:31:28.983835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:21:46.194 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 501507 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 501507 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 501507 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:47.570 rmmod nvme_tcp 00:21:47.570 rmmod nvme_fabrics 00:21:47.570 rmmod nvme_keyring 00:21:47.570 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:47.571 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:47.571 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:47.571 12:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 501236 ']' 00:21:47.571 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 501236 00:21:47.571 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 501236 ']' 00:21:47.571 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 501236 00:21:47.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (501236) - No such process 00:21:47.571 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 501236 is not found' 00:21:47.571 Process with pid 501236 is not found 00:21:47.571 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:47.571 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:47.571 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:47.571 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:47.571 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:47.571 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:47.571 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:47.571 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:47.571 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:47.571 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.571 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.571 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:49.476 00:21:49.476 real 0m7.384s 00:21:49.476 user 0m17.614s 00:21:49.476 sys 0m1.325s 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:49.476 ************************************ 00:21:49.476 END TEST nvmf_shutdown_tc3 00:21:49.476 ************************************ 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:49.476 ************************************ 00:21:49.476 START TEST nvmf_shutdown_tc4 00:21:49.476 ************************************ 00:21:49.476 12:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:49.476 12:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:49.476 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:49.477 12:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:49.477 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:49.477 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.477 12:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:21:49.477 Found net devices under 0000:86:00.0: cvl_0_0 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:49.477 Found net devices under 0000:86:00.1: cvl_0_1 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:49.477 12:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.477 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.737 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.737 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.737 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:49.737 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.737 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.737 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.737 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:49.737 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:49.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:49.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:21:49.737 00:21:49.737 --- 10.0.0.2 ping statistics --- 00:21:49.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.737 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:21:49.737 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:21:49.737 00:21:49.737 --- 10.0.0.1 ping statistics --- 00:21:49.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.737 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:21:49.737 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.737 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:49.737 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:49.737 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.737 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:49.737 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:49.737 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.737 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:49.737 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:49.996 12:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:49.996 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:49.996 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.996 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:49.996 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=502592 00:21:49.996 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 502592 00:21:49.996 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:49.996 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 502592 ']' 00:21:49.996 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.996 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.996 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:49.996 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.996 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:49.996 [2024-11-20 12:31:32.934834] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:21:49.996 [2024-11-20 12:31:32.934889] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.996 [2024-11-20 12:31:33.014401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.996 [2024-11-20 12:31:33.058258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.996 [2024-11-20 12:31:33.058301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.996 [2024-11-20 12:31:33.058308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.996 [2024-11-20 12:31:33.058315] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.996 [2024-11-20 12:31:33.058320] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:49.996 [2024-11-20 12:31:33.060013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.996 [2024-11-20 12:31:33.060119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.996 [2024-11-20 12:31:33.060247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.996 [2024-11-20 12:31:33.060247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:50.934 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.934 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:50.934 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:50.934 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.934 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:50.934 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.934 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:50.934 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.934 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:50.934 [2024-11-20 12:31:33.799605] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.934 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.934 12:31:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:50.934 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:50.934 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:50.934 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:50.934 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:50.934 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.935 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:50.935 Malloc1 00:21:50.935 [2024-11-20 12:31:33.906719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.935 Malloc2 00:21:50.935 Malloc3 00:21:50.935 Malloc4 00:21:51.194 Malloc5 00:21:51.194 Malloc6 00:21:51.194 Malloc7 00:21:51.194 Malloc8 00:21:51.194 Malloc9 
00:21:51.194 Malloc10 00:21:51.194 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.194 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:51.194 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:51.194 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:51.453 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=502898 00:21:51.453 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:51.453 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:51.453 [2024-11-20 12:31:34.414739] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:56.736 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:56.736 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 502592 00:21:56.736 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 502592 ']' 00:21:56.736 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 502592 00:21:56.736 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:56.736 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.736 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 502592 00:21:56.736 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:56.736 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:56.736 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 502592' 00:21:56.736 killing process with pid 502592 00:21:56.736 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 502592 00:21:56.736 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 502592 00:21:56.736 [2024-11-20 12:31:39.403226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8fe0 is same with the state(6) to be set 00:21:56.736 [2024-11-20 12:31:39.403276] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8fe0 is same with the state(6) to be set 00:21:56.736 [2024-11-20 12:31:39.403284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8fe0 is same with the state(6) to be set 00:21:56.736 [2024-11-20 12:31:39.403836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c94b0 is same with the state(6) to be set 00:21:56.736 [2024-11-20 12:31:39.403869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c94b0 is same with the state(6) to be set 00:21:56.736 [2024-11-20 12:31:39.403878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c94b0 is same with the state(6) to be set 00:21:56.736 [2024-11-20 12:31:39.403885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c94b0 is same with the state(6) to be set 00:21:56.736 [2024-11-20 12:31:39.403892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c94b0 is same with the state(6) to be set 00:21:56.736 [2024-11-20 12:31:39.404478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9980 is same with the state(6) to be set 00:21:56.736 [2024-11-20 12:31:39.404513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9980 is same with the state(6) to be set 00:21:56.736 [2024-11-20 12:31:39.404521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9980 is same with the state(6) to be set 00:21:56.736 [2024-11-20 12:31:39.404529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9980 is same with the state(6) to be set 00:21:56.736 [2024-11-20 12:31:39.404537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9980 is same with the state(6) to be set 00:21:56.736 [2024-11-20 12:31:39.404544] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9980 is same with the state(6) to be set 00:21:56.736 [2024-11-20 12:31:39.405720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8b10 is same with the state(6) to be set 00:21:56.736 [2024-11-20 12:31:39.405748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8b10 is same with the state(6) to be set 00:21:56.736 [2024-11-20 12:31:39.405756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8b10 is same with the state(6) to be set 00:21:56.736 [2024-11-20 12:31:39.405763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8b10 is same with the state(6) to be set 00:21:56.736 [2024-11-20 12:31:39.405771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8b10 is same with the state(6) to be set 00:21:56.736 [2024-11-20 12:31:39.405777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c8b10 is same with the state(6) to be set 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 starting I/O failed: -6 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 starting I/O failed: -6 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 starting I/O failed: -6 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write 
completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 starting I/O failed: -6 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 starting I/O failed: -6 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 starting I/O failed: -6 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 starting I/O failed: -6 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 starting I/O failed: -6 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 starting I/O failed: -6 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 Write completed with error (sct=0, sc=8) 00:21:56.736 starting I/O failed: -6 00:21:56.736 [2024-11-20 12:31:39.409479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:56.736 [2024-11-20 12:31:39.409546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7de0 is same with the 
state(6) to be set
00:21:56.736 [2024-11-20 12:31:39.409568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7de0 is same with the state(6) to be set
00:21:56.736 Write completed with error (sct=0, sc=8)
00:21:56.736 starting I/O failed: -6
00:21:56.737 Write completed with error (sct=0, sc=8)
00:21:56.737 starting I/O failed: -6
00:21:56.737 [2024-11-20 12:31:39.410112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca320 is same with the state(6) to be set
00:21:56.737 [2024-11-20 12:31:39.410141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca320 is same with the state(6) to be set
00:21:56.737 [2024-11-20 12:31:39.410148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca320 is same with the state(6) to be set
00:21:56.737 [2024-11-20 12:31:39.410162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca320 is same with the state(6) to be set
00:21:56.737 [2024-11-20 12:31:39.410170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca320 is same with the state(6) to be set
00:21:56.737 [2024-11-20 12:31:39.410177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca320 is same with the state(6) to be set
00:21:56.737 [2024-11-20 12:31:39.410183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca320 is same with the state(6) to be set
00:21:56.737 [2024-11-20 12:31:39.410413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:56.737 [2024-11-20 12:31:39.410490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7f0 is same with the state(6) to be set
00:21:56.737 [2024-11-20 12:31:39.410512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7f0 is same with the state(6) to be set
00:21:56.737 [2024-11-20 12:31:39.410520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7f0 is same with the state(6) to be set
00:21:56.737 [2024-11-20 12:31:39.410527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7f0 is same with the state(6) to be set
00:21:56.737 [2024-11-20 12:31:39.410533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7f0 is same with the state(6) to be set
00:21:56.737 [2024-11-20 12:31:39.410815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cacc0 is same with the state(6) to be set
00:21:56.737 [2024-11-20 12:31:39.410845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cacc0 is same with the state(6) to be set
00:21:56.737 [2024-11-20 12:31:39.410853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cacc0 is same with the state(6) to be set
00:21:56.737 [2024-11-20 12:31:39.410861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cacc0 is same with the state(6) to be set
00:21:56.737 [2024-11-20 12:31:39.410868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cacc0 is same with the state(6) to be set
00:21:56.737 [2024-11-20 12:31:39.410880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cacc0 is same with the state(6) to be set
00:21:56.737 [2024-11-20 12:31:39.410888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cacc0 is same with the state(6) to be set
00:21:56.737 [2024-11-20 12:31:39.410895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cacc0 is same with the state(6) to be set
00:21:56.738 [2024-11-20 12:31:39.410902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cacc0 is same with the state(6) to be set
00:21:56.738 [2024-11-20 12:31:39.410908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cacc0 is same with the state(6) to be set
00:21:56.738 Write completed with error (sct=0, sc=8)
00:21:56.738 starting I/O failed: -6
00:21:56.738 [2024-11-20 12:31:39.411424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:56.738 Write completed with error (sct=0, sc=8)
00:21:56.738 starting I/O failed: -6
00:21:56.739 Write completed with error (sct=0, sc=8)
00:21:56.739 starting I/O failed: -6
00:21:56.739 [2024-11-20 12:31:39.412993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:56.739 NVMe io qpair process completion error
00:21:56.739 [2024-11-20 12:31:39.413942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:56.739 [2024-11-20 12:31:39.414591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cbb30 is same with the state(6) to be set
00:21:56.739 [2024-11-20 12:31:39.414607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cbb30 is same with the state(6) to be set
00:21:56.739 [2024-11-20 12:31:39.414615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cbb30 is same with the state(6) to be set
00:21:56.739 [2024-11-20 12:31:39.414623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cbb30 is same with the state(6) to be set
00:21:56.739 [2024-11-20 12:31:39.414630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cbb30 is same with the state(6) to be set
00:21:56.739 [2024-11-20 12:31:39.414637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cbb30 is same with the state(6) to be set
00:21:56.739 [2024-11-20 12:31:39.414643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cbb30 is same with the state(6) to be set
00:21:56.739 [2024-11-20 12:31:39.414649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cbb30 is same with the state(6) to be set
00:21:56.739 [2024-11-20 12:31:39.414812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:56.740 Write completed with error (sct=0, sc=8)
00:21:56.740 starting I/O failed: -6
00:21:56.740 [2024-11-20 12:31:39.415843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:56.741 Write completed with error (sct=0, sc=8)
00:21:56.741 starting I/O failed: -6
00:21:56.741 [2024-11-20 12:31:39.417810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:56.741 NVMe io qpair process completion error
00:21:56.741 [2024-11-20 12:31:39.418385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21333c0 is same with the state(6) to be set
00:21:56.741 [2024-11-20 12:31:39.418409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21333c0 is same with the state(6) to be set
00:21:56.741 [2024-11-20 12:31:39.418455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132550 is same with the state(6) to be set
00:21:56.741 [2024-11-20 12:31:39.418478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132550 is same with the state(6) to be set
00:21:56.741 [2024-11-20 12:31:39.418486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132550 is same with the state(6) to be set
00:21:56.741 [2024-11-20 12:31:39.418493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132550 is same with the state(6) to be set
00:21:56.741 [2024-11-20 12:31:39.418500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132550 is same with the state(6) to be set
00:21:56.741 [2024-11-20 12:31:39.418507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132550 is same with the state(6) to be set
00:21:56.741 [2024-11-20 12:31:39.418513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2132550 is same with the state(6) to be set
00:21:56.741 [2024-11-20 12:31:39.418850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:56.741 Write completed with error (sct=0, sc=8)
00:21:56.741 [2024-11-20 12:31:39.419765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:56.741 starting I/O failed: -6
00:21:56.741 starting I/O failed: -6
00:21:56.741 starting I/O failed: -6 00:21:56.741 starting I/O failed: -6 00:21:56.741 starting I/O failed: -6 00:21:56.741 starting I/O failed: -6 00:21:56.741 Write completed with error (sct=0, sc=8) 00:21:56.741 Write completed with error (sct=0, sc=8) 00:21:56.741 starting I/O failed: -6 00:21:56.741 Write completed with error (sct=0, sc=8) 00:21:56.741 starting I/O failed: -6 00:21:56.741 Write completed with error (sct=0, sc=8) 00:21:56.741 starting I/O failed: -6 00:21:56.741 Write completed with error (sct=0, sc=8) 00:21:56.741 Write completed with error (sct=0, sc=8) 00:21:56.741 starting I/O failed: -6 00:21:56.741 Write completed with error (sct=0, sc=8) 00:21:56.741 starting I/O failed: -6 00:21:56.741 Write completed with error (sct=0, sc=8) 00:21:56.741 starting I/O failed: -6 00:21:56.741 Write completed with error (sct=0, sc=8) 00:21:56.741 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 
starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 [2024-11-20 12:31:39.420966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 
00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: 
-6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O 
failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.742 Write completed with error (sct=0, sc=8) 00:21:56.742 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 [2024-11-20 12:31:39.422753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:56.743 NVMe io qpair process completion error 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error 
(sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 [2024-11-20 12:31:39.423732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 
(No such device or address) on qpair id 2 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 
starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 [2024-11-20 12:31:39.424649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: 
-6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.743 Write completed with error (sct=0, sc=8) 00:21:56.743 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with 
error (sct=0, sc=8) 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 [2024-11-20 12:31:39.425636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 
starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 
00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, 
sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.744 Write completed with error (sct=0, sc=8) 00:21:56.744 starting I/O failed: -6 00:21:56.745 Write completed with error (sct=0, sc=8) 00:21:56.745 starting I/O failed: -6 00:21:56.745 [2024-11-20 12:31:39.427292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:56.745 NVMe io qpair process completion error 00:21:56.745 Write completed with error (sct=0, sc=8) 00:21:56.745 Write completed with error (sct=0, sc=8) 00:21:56.745 starting I/O failed: -6 00:21:56.745 Write completed with error (sct=0, sc=8) 00:21:56.745 Write completed with error (sct=0, sc=8) 00:21:56.745 Write completed with error (sct=0, sc=8) 00:21:56.745 Write completed with error (sct=0, sc=8) 00:21:56.745 starting I/O failed: -6 00:21:56.745 Write completed with error (sct=0, sc=8) 00:21:56.745 Write completed with error (sct=0, sc=8) 00:21:56.745 Write completed with error (sct=0, sc=8) 00:21:56.745 Write completed with error (sct=0, sc=8) 00:21:56.745 starting I/O failed: -6 00:21:56.745 Write completed with error (sct=0, 
sc=8) 00:21:56.745 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:21:56.745 [2024-11-20 12:31:39.428448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:56.745 [repeated write-error entries elided]
00:21:56.745 [2024-11-20 12:31:39.429384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:56.745 [repeated write-error entries elided]
00:21:56.746 [2024-11-20 12:31:39.430408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:56.746 [repeated write-error entries elided]
00:21:56.746 [2024-11-20 12:31:39.433831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:56.746 NVMe io qpair process completion error
00:21:56.746 [repeated write-error entries elided]
00:21:56.747 [2024-11-20 12:31:39.434871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:56.747 [repeated write-error entries elided]
00:21:56.747 [2024-11-20 12:31:39.435764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:56.747 [repeated write-error entries elided]
00:21:56.747 [2024-11-20 12:31:39.436814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:56.748 [repeated write-error entries elided]
00:21:56.748 [2024-11-20 12:31:39.441040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:56.748 NVMe io qpair process completion error
00:21:56.748 [repeated write-error entries elided]
00:21:56.749 [2024-11-20 12:31:39.442648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:56.749 [repeated write-error entries elided]
00:21:56.750 [2024-11-20 12:31:39.443803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:56.750 [repeated write-error entries elided] 00:21:56.750 starting I/O
failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting 
I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 
starting I/O failed: -6 00:21:56.750 [2024-11-20 12:31:39.445655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:56.750 NVMe io qpair process completion error 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error 
(sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.750 starting I/O failed: -6 00:21:56.750 Write completed with error (sct=0, sc=8) 00:21:56.751 [2024-11-20 12:31:39.446644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write 
completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error 
(sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 [2024-11-20 12:31:39.447594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 
00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with 
error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 [2024-11-20 12:31:39.448615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.751 starting I/O failed: -6 00:21:56.751 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, 
sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error 
(sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 [2024-11-20 
12:31:39.450458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:56.752 NVMe io qpair process completion error 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 
00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 Write completed with error (sct=0, sc=8) 00:21:56.752 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 [2024-11-20 12:31:39.451392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error 
(sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 Write completed with error (sct=0, sc=8) 00:21:56.753 starting I/O failed: -6 00:21:56.753 [2024-11-20 12:31:39.452320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: 
*ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:56.753 Write completed with error (sct=0, sc=8)
00:21:56.753 starting I/O failed: -6
00:21:56.753 [the two messages above repeat for every outstanding I/O on the qpair; identical repeats omitted]
00:21:56.753 [2024-11-20 12:31:39.453352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:56.753 Write completed with error (sct=0, sc=8)
00:21:56.753 starting I/O failed: -6
00:21:56.754 [identical repeats omitted]
00:21:56.754 [2024-11-20 12:31:39.458088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:56.754 NVMe io qpair process completion error
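The long runs of identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages above are easiest to audit when collapsed with standard tools. A minimal, self-contained sketch (the inlined sample lines only imitate the log above; they are not taken from this run):

```shell
#!/usr/bin/env bash
# Collapse runs of identical log lines with uniq -c, so a few thousand
# repeated completion errors reduce to one counted entry per message.
log=$(mktemp)
cat > "$log" <<'EOF'
Write completed with error (sct=0, sc=8)
starting I/O failed: -6
Write completed with error (sct=0, sc=8)
starting I/O failed: -6
Write completed with error (sct=0, sc=8)
EOF
# Sort first so uniq sees all duplicates as adjacent, then rank by count.
sort "$log" | uniq -c | sort -rn
rm -f "$log"
```

The same `sort | uniq -c | sort -rn` pipeline applied to a captured copy of this console output quickly shows which error message dominates and how many qpairs reported the transport failure.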
00:21:56.754 Write completed with error (sct=0, sc=8)
00:21:56.754 starting I/O failed: -6
00:21:56.754 [identical "Write completed with error" / "starting I/O failed: -6" repeats omitted]
00:21:56.755 [2024-11-20 12:31:39.459069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:56.755 [identical repeats omitted]
00:21:56.755 [2024-11-20 12:31:39.460038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:56.755 [identical repeats omitted]
00:21:56.755 [2024-11-20 12:31:39.461048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:56.756 [identical repeats omitted]
00:21:56.756 [2024-11-20 12:31:39.464195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:56.756 NVMe io qpair process completion error
00:21:56.756 Initializing NVMe Controllers
00:21:56.756 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:56.756 Controller IO queue size 128, less than required.
00:21:56.756 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:56.756 [the same attach message and queue-size warning repeat for cnode10, cnode3, cnode7, cnode4, cnode9, cnode6, cnode1, cnode2 and cnode5]
00:21:56.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:56.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:56.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:56.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:56.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:56.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:56.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:56.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:56.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:56.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:56.757 Initialization complete. Launching workers.
00:21:56.757 ========================================================
00:21:56.757 Latency(us)
00:21:56.757 Device Information : IOPS MiB/s Average min max
00:21:56.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2144.47 92.15 59693.35 892.18 122659.95
00:21:56.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2116.18 90.93 59796.45 739.83 128421.55
00:21:56.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2128.27 91.45 59466.84 898.93 127855.31
00:21:56.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2114.67 90.86 59867.00 726.65 106311.39
00:21:56.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2109.92 90.66 60015.24 924.50 103693.65
00:21:56.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2120.71 91.12 59724.03 540.27 101596.35
00:21:56.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2134.54 91.72 59372.83 915.31 99482.40
00:21:56.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2106.89 90.53 60192.84 880.74 112588.34
00:21:56.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2130.65 91.55 59536.75 653.86 114158.73
00:21:56.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2154.19 92.56 58899.37 694.00 96518.24
00:21:56.757 ========================================================
00:21:56.757 Total : 21260.49 913.54 59654.39 540.27 128421.55
00:21:56.757
00:21:56.757 [2024-11-20 12:31:39.467168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136b740 is same with the state(6) to be set
00:21:56.757 [2024-11-20 12:31:39.467213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c900 is same with the state(6) to be set
00:21:56.757 [2024-11-20 12:31:39.467246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136a560 is same with the state(6) to be set
00:21:56.757 [2024-11-20 12:31:39.467274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136b410 is same with the state(6) to be set
00:21:56.757 [2024-11-20 12:31:39.467302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136a890 is same with the state(6) to be set
00:21:56.757 [2024-11-20 12:31:39.467331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136ba70 is same with the state(6) to be set
00:21:56.757 [2024-11-20 12:31:39.467359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136aef0 is same with the state(6) to be set
00:21:56.757 [2024-11-20 12:31:39.467387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c720 is same with the state(6) to be set
00:21:56.757 [2024-11-20 12:31:39.467416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136cae0 is same with the state(6) to be set
00:21:56.757 [2024-11-20 12:31:39.467445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136abc0 is same with the state(6) to be set
00:21:56.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:56.757 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 502898
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 502898
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
common/autotest_common.sh@640 -- # local arg=wait
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 502898
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:57.696 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:57.696 rmmod nvme_tcp
00:21:57.958 rmmod nvme_fabrics
00:21:57.958 rmmod nvme_keyring
00:21:57.958 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:57.958 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:21:57.958 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:21:57.958 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 502592 ']'
00:21:57.958 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 502592
00:21:57.958 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 502592 ']'
00:21:57.958 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 502592
00:21:57.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (502592) - No such process
00:21:57.958 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 502592 is not found'
00:21:57.958 Process with pid 502592 is not found
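The `killprocess` trace above probes PID 502592 with `kill -0`, which delivers no signal; it only checks that the process exists and may be signaled, and its failure is what produces the "Process with pid 502592 is not found" message. A standalone sketch of the same pattern (99999999 is a hypothetical stale PID, not a value from this run):

```shell
#!/usr/bin/env bash
# Probe a PID without signaling it: kill -0 succeeds only if the process
# exists (and we are allowed to signal it), mirroring killprocess above.
probe() {
    if kill -0 "$1" 2>/dev/null; then
        echo "Process with pid $1 is running"
    else
        echo "Process with pid $1 is not found"
    fi
}

probe $$          # the current shell itself: always running
probe 99999999    # a hypothetical stale PID: reported as not found
```

Redirecting stderr keeps the probe quiet; without `2>/dev/null` the shell prints the same "No such process" diagnostic seen in the trace.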
12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:57.958 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:57.958 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:57.958 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:57.958 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:57.958 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:57.958 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:57.958 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:57.958 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:57.958 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.958 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.958 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.956 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:59.956 00:21:59.956 real 0m10.405s 00:21:59.956 user 0m27.523s 00:21:59.956 sys 0m5.128s 00:21:59.956 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.956 12:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:59.956 ************************************ 00:21:59.956 END TEST nvmf_shutdown_tc4 00:21:59.956 ************************************ 00:21:59.956 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:59.956 00:21:59.956 real 0m41.431s 00:21:59.956 user 1m42.796s 00:21:59.956 sys 0m13.943s 00:21:59.956 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.956 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:59.956 ************************************ 00:21:59.956 END TEST nvmf_shutdown 00:21:59.956 ************************************ 00:21:59.956 12:31:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:59.956 12:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:59.956 12:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.956 12:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:59.957 ************************************ 00:21:59.957 START TEST nvmf_nsid 00:21:59.957 ************************************ 00:21:59.957 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:00.230 * Looking for test storage... 
00:22:00.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:00.230 
12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:00.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.230 --rc genhtml_branch_coverage=1 00:22:00.230 --rc genhtml_function_coverage=1 00:22:00.230 --rc genhtml_legend=1 00:22:00.230 --rc geninfo_all_blocks=1 00:22:00.230 --rc 
geninfo_unexecuted_blocks=1 00:22:00.230 00:22:00.230 ' 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:00.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.230 --rc genhtml_branch_coverage=1 00:22:00.230 --rc genhtml_function_coverage=1 00:22:00.230 --rc genhtml_legend=1 00:22:00.230 --rc geninfo_all_blocks=1 00:22:00.230 --rc geninfo_unexecuted_blocks=1 00:22:00.230 00:22:00.230 ' 00:22:00.230 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:00.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.230 --rc genhtml_branch_coverage=1 00:22:00.231 --rc genhtml_function_coverage=1 00:22:00.231 --rc genhtml_legend=1 00:22:00.231 --rc geninfo_all_blocks=1 00:22:00.231 --rc geninfo_unexecuted_blocks=1 00:22:00.231 00:22:00.231 ' 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:00.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.231 --rc genhtml_branch_coverage=1 00:22:00.231 --rc genhtml_function_coverage=1 00:22:00.231 --rc genhtml_legend=1 00:22:00.231 --rc geninfo_all_blocks=1 00:22:00.231 --rc geninfo_unexecuted_blocks=1 00:22:00.231 00:22:00.231 ' 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.231 12:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:00.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:00.231 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:06.815 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.815 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:06.815 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:06.815 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:06.815 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:06.815 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:06.815 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:06.815 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:06.815 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:06.815 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:06.816 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:06.816 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:06.816 Found net devices under 0000:86:00.0: cvl_0_0 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:06.816 Found net devices under 0000:86:00.1: cvl_0_1 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:06.816 12:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:06.816 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:06.816 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:06.816 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:06.816 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:06.816 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:06.816 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:06.816 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:06.816 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:06.817 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:22:06.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:22:06.817 00:22:06.817 --- 10.0.0.2 ping statistics --- 00:22:06.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.817 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:06.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:22:06.817 00:22:06.817 --- 10.0.0.1 ping statistics --- 00:22:06.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.817 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:06.817 12:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=507518 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 507518 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 507518 ']' 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:06.817 [2024-11-20 12:31:49.221864] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:22:06.817 [2024-11-20 12:31:49.221909] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.817 [2024-11-20 12:31:49.300406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.817 [2024-11-20 12:31:49.341298] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.817 [2024-11-20 12:31:49.341336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.817 [2024-11-20 12:31:49.341343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.817 [2024-11-20 12:31:49.341349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.817 [2024-11-20 12:31:49.341354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:06.817 [2024-11-20 12:31:49.341921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=507537 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.817 
12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=ce377d0d-f88b-4e7d-a570-aa715a56b098 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=0e60912f-dce2-44da-bcfe-29797d371ac1 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=f319ca52-2e14-4395-9ced-fda11a532083 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:06.817 null0 00:22:06.817 null1 00:22:06.817 [2024-11-20 12:31:49.523265] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:22:06.817 [2024-11-20 12:31:49.523314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid507537 ] 00:22:06.817 null2 00:22:06.817 [2024-11-20 12:31:49.528214] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.817 [2024-11-20 12:31:49.552409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 507537 /var/tmp/tgt2.sock 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 507537 ']' 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:06.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:06.817 [2024-11-20 12:31:49.598125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.817 [2024-11-20 12:31:49.644261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:06.817 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:07.076 [2024-11-20 12:31:50.175280] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.076 [2024-11-20 12:31:50.191397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:07.335 nvme0n1 nvme0n2 00:22:07.335 nvme1n1 00:22:07.335 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:07.335 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:07.335 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:08.271 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:08.271 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:08.271 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:22:08.271 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:08.271 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:08.271 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:08.271 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:08.271 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:08.271 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:08.271 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:08.271 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:08.271 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:08.271 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:09.207 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:09.207 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:09.207 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:09.207 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:09.466 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:09.466 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid ce377d0d-f88b-4e7d-a570-aa715a56b098 00:22:09.466 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:09.466 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:09.466 12:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:09.466 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:09.466 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:09.466 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ce377d0df88b4e7da570aa715a56b098 00:22:09.466 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo CE377D0DF88B4E7DA570AA715A56B098 00:22:09.466 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ CE377D0DF88B4E7DA570AA715A56B098 == \C\E\3\7\7\D\0\D\F\8\8\B\4\E\7\D\A\5\7\0\A\A\7\1\5\A\5\6\B\0\9\8 ]] 00:22:09.466 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:09.466 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:09.466 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:09.466 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:09.466 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:09.466 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:09.466 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 0e60912f-dce2-44da-bcfe-29797d371ac1 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:09.467 
12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0e60912fdce244dabcfe29797d371ac1 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0E60912FDCE244DABCFE29797D371AC1 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 0E60912FDCE244DABCFE29797D371AC1 == \0\E\6\0\9\1\2\F\D\C\E\2\4\4\D\A\B\C\F\E\2\9\7\9\7\D\3\7\1\A\C\1 ]] 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid f319ca52-2e14-4395-9ced-fda11a532083 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f319ca522e1443959cedfda11a532083 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F319CA522E1443959CEDFDA11A532083 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ F319CA522E1443959CEDFDA11A532083 == \F\3\1\9\C\A\5\2\2\E\1\4\4\3\9\5\9\C\E\D\F\D\A\1\1\A\5\3\2\0\8\3 ]] 00:22:09.467 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:09.727 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:09.727 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:09.727 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 507537 00:22:09.727 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 507537 ']' 00:22:09.727 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 507537 00:22:09.727 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:09.727 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.727 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 507537 00:22:09.727 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:09.727 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:09.727 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 507537' 00:22:09.727 killing process with pid 507537 00:22:09.727 12:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 507537 00:22:09.727 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 507537 00:22:09.986 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:09.986 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:09.986 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:09.986 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:09.986 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:22:09.986 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:09.986 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:09.986 rmmod nvme_tcp 00:22:09.986 rmmod nvme_fabrics 00:22:09.986 rmmod nvme_keyring 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 507518 ']' 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 507518 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 507518 ']' 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 507518 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.246 12:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 507518 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 507518' 00:22:10.246 killing process with pid 507518 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 507518 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 507518 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:10.246 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:10.505 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.505 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.505 12:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.413 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:12.413 00:22:12.413 real 0m12.360s 00:22:12.413 user 0m9.713s 00:22:12.413 sys 0m5.415s 00:22:12.413 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.413 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:12.413 ************************************ 00:22:12.413 END TEST nvmf_nsid 00:22:12.413 ************************************ 00:22:12.413 12:31:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:12.413 00:22:12.413 real 12m2.131s 00:22:12.413 user 25m46.146s 00:22:12.413 sys 3m43.851s 00:22:12.413 12:31:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.413 12:31:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:12.413 ************************************ 00:22:12.413 END TEST nvmf_target_extra 00:22:12.413 ************************************ 00:22:12.413 12:31:55 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:12.413 12:31:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:12.413 12:31:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:12.413 12:31:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:12.671 ************************************ 00:22:12.671 START TEST nvmf_host 00:22:12.671 ************************************ 00:22:12.671 12:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:12.671 * Looking for test storage... 
00:22:12.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:12.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.672 --rc genhtml_branch_coverage=1 00:22:12.672 --rc genhtml_function_coverage=1 00:22:12.672 --rc genhtml_legend=1 00:22:12.672 --rc geninfo_all_blocks=1 00:22:12.672 --rc geninfo_unexecuted_blocks=1 00:22:12.672 00:22:12.672 ' 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:12.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.672 --rc genhtml_branch_coverage=1 00:22:12.672 --rc genhtml_function_coverage=1 00:22:12.672 --rc genhtml_legend=1 00:22:12.672 --rc 
geninfo_all_blocks=1 00:22:12.672 --rc geninfo_unexecuted_blocks=1 00:22:12.672 00:22:12.672 ' 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:12.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.672 --rc genhtml_branch_coverage=1 00:22:12.672 --rc genhtml_function_coverage=1 00:22:12.672 --rc genhtml_legend=1 00:22:12.672 --rc geninfo_all_blocks=1 00:22:12.672 --rc geninfo_unexecuted_blocks=1 00:22:12.672 00:22:12.672 ' 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:12.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.672 --rc genhtml_branch_coverage=1 00:22:12.672 --rc genhtml_function_coverage=1 00:22:12.672 --rc genhtml_legend=1 00:22:12.672 --rc geninfo_all_blocks=1 00:22:12.672 --rc geninfo_unexecuted_blocks=1 00:22:12.672 00:22:12.672 ' 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:12.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.672 ************************************ 00:22:12.672 START TEST nvmf_multicontroller 00:22:12.672 ************************************ 00:22:12.672 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:12.932 * Looking for test storage... 
00:22:12.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:12.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.932 --rc genhtml_branch_coverage=1 00:22:12.932 --rc genhtml_function_coverage=1 
00:22:12.932 --rc genhtml_legend=1 00:22:12.932 --rc geninfo_all_blocks=1 00:22:12.932 --rc geninfo_unexecuted_blocks=1 00:22:12.932 00:22:12.932 ' 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:12.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.932 --rc genhtml_branch_coverage=1 00:22:12.932 --rc genhtml_function_coverage=1 00:22:12.932 --rc genhtml_legend=1 00:22:12.932 --rc geninfo_all_blocks=1 00:22:12.932 --rc geninfo_unexecuted_blocks=1 00:22:12.932 00:22:12.932 ' 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:12.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.932 --rc genhtml_branch_coverage=1 00:22:12.932 --rc genhtml_function_coverage=1 00:22:12.932 --rc genhtml_legend=1 00:22:12.932 --rc geninfo_all_blocks=1 00:22:12.932 --rc geninfo_unexecuted_blocks=1 00:22:12.932 00:22:12.932 ' 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:12.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.932 --rc genhtml_branch_coverage=1 00:22:12.932 --rc genhtml_function_coverage=1 00:22:12.932 --rc genhtml_legend=1 00:22:12.932 --rc geninfo_all_blocks=1 00:22:12.932 --rc geninfo_unexecuted_blocks=1 00:22:12.932 00:22:12.932 ' 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.932 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.933 12:31:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:12.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:12.933 12:31:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.505 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.505 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:19.505 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:19.505 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:19.505 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:19.505 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:19.505 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:19.505 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:19.505 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:19.506 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:19.506 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:19.506 12:32:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:19.506 Found net devices under 0000:86:00.0: cvl_0_0 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:19.506 Found net devices under 0000:86:00.1: cvl_0_1 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:19.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:22:19.506 00:22:19.506 --- 10.0.0.2 ping statistics --- 00:22:19.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.506 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:19.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:22:19.506 00:22:19.506 --- 10.0.0.1 ping statistics --- 00:22:19.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.506 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:19.506 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:19.507 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:19.507 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.507 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=511855 00:22:19.507 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 511855 00:22:19.507 12:32:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:19.507 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 511855 ']' 00:22:19.507 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.507 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.507 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.507 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.507 12:32:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.507 [2024-11-20 12:32:01.992396] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:22:19.507 [2024-11-20 12:32:01.992443] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.507 [2024-11-20 12:32:02.073440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:19.507 [2024-11-20 12:32:02.114491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.507 [2024-11-20 12:32:02.114530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:19.507 [2024-11-20 12:32:02.114538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.507 [2024-11-20 12:32:02.114544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.507 [2024-11-20 12:32:02.114550] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:19.507 [2024-11-20 12:32:02.116028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.507 [2024-11-20 12:32:02.116113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.507 [2024-11-20 12:32:02.116114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.507 [2024-11-20 12:32:02.260841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.507 Malloc0 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.507 [2024-11-20 
12:32:02.322043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.507 [2024-11-20 12:32:02.329951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.507 Malloc1 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=511889 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 511889 /var/tmp/bdevperf.sock 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 511889 ']' 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:19.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.507 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.767 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:19.767 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:19.767 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:19.767 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.767 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:19.767 NVMe0n1 00:22:19.767 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.767 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:19.767 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:19.767 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.767 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.026 1 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:20.026 12:32:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.026 request: 00:22:20.026 { 00:22:20.026 "name": "NVMe0", 00:22:20.026 "trtype": "tcp", 00:22:20.026 "traddr": "10.0.0.2", 00:22:20.026 "adrfam": "ipv4", 00:22:20.026 "trsvcid": "4420", 00:22:20.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.026 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:20.026 "hostaddr": "10.0.0.1", 00:22:20.026 "prchk_reftag": false, 00:22:20.026 "prchk_guard": false, 00:22:20.026 "hdgst": false, 00:22:20.026 "ddgst": false, 00:22:20.026 "allow_unrecognized_csi": false, 00:22:20.026 "method": "bdev_nvme_attach_controller", 00:22:20.026 "req_id": 1 00:22:20.026 } 00:22:20.026 Got JSON-RPC error response 00:22:20.026 response: 00:22:20.026 { 00:22:20.026 "code": -114, 00:22:20.026 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:20.026 } 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:20.026 12:32:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.026 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.026 request: 00:22:20.026 { 00:22:20.026 "name": "NVMe0", 00:22:20.026 "trtype": "tcp", 00:22:20.026 "traddr": "10.0.0.2", 00:22:20.026 "adrfam": "ipv4", 00:22:20.026 "trsvcid": "4420", 00:22:20.026 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:20.026 "hostaddr": "10.0.0.1", 00:22:20.026 "prchk_reftag": false, 00:22:20.026 "prchk_guard": false, 00:22:20.026 "hdgst": false, 00:22:20.026 "ddgst": false, 00:22:20.026 "allow_unrecognized_csi": false, 00:22:20.026 "method": "bdev_nvme_attach_controller", 00:22:20.026 "req_id": 1 00:22:20.026 } 00:22:20.026 Got JSON-RPC error response 00:22:20.026 response: 00:22:20.026 { 00:22:20.026 "code": -114, 00:22:20.026 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:20.026 } 00:22:20.026 12:32:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.027 request: 00:22:20.027 { 00:22:20.027 "name": "NVMe0", 00:22:20.027 "trtype": "tcp", 00:22:20.027 "traddr": "10.0.0.2", 00:22:20.027 "adrfam": "ipv4", 00:22:20.027 "trsvcid": "4420", 00:22:20.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.027 "hostaddr": "10.0.0.1", 00:22:20.027 "prchk_reftag": false, 00:22:20.027 "prchk_guard": false, 00:22:20.027 "hdgst": false, 00:22:20.027 "ddgst": false, 00:22:20.027 "multipath": "disable", 00:22:20.027 "allow_unrecognized_csi": false, 00:22:20.027 "method": "bdev_nvme_attach_controller", 00:22:20.027 "req_id": 1 00:22:20.027 } 00:22:20.027 Got JSON-RPC error response 00:22:20.027 response: 00:22:20.027 { 00:22:20.027 "code": -114, 00:22:20.027 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:20.027 } 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.027 request: 00:22:20.027 { 00:22:20.027 "name": "NVMe0", 00:22:20.027 "trtype": "tcp", 00:22:20.027 "traddr": "10.0.0.2", 00:22:20.027 "adrfam": "ipv4", 00:22:20.027 "trsvcid": "4420", 00:22:20.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.027 "hostaddr": "10.0.0.1", 00:22:20.027 "prchk_reftag": false, 00:22:20.027 "prchk_guard": false, 00:22:20.027 "hdgst": false, 00:22:20.027 "ddgst": false, 00:22:20.027 "multipath": "failover", 00:22:20.027 "allow_unrecognized_csi": false, 00:22:20.027 "method": "bdev_nvme_attach_controller", 00:22:20.027 "req_id": 1 00:22:20.027 } 00:22:20.027 Got JSON-RPC error response 00:22:20.027 response: 00:22:20.027 { 00:22:20.027 "code": -114, 00:22:20.027 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:20.027 } 00:22:20.027 12:32:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.027 12:32:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.286 NVMe0n1 00:22:20.286 12:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.286 12:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:20.286 12:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.286 12:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.286 12:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.286 12:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:20.286 12:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.286 12:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.286 00:22:20.286 12:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.286 12:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:20.286 12:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:20.286 12:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.286 12:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:20.286 12:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.286 12:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:20.286 12:32:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:21.664 { 00:22:21.664 "results": [ 00:22:21.664 { 00:22:21.664 "job": "NVMe0n1", 00:22:21.664 "core_mask": "0x1", 00:22:21.664 "workload": "write", 00:22:21.664 "status": "finished", 00:22:21.664 "queue_depth": 128, 00:22:21.664 "io_size": 4096, 00:22:21.664 "runtime": 1.007996, 00:22:21.664 "iops": 24273.905848832732, 00:22:21.664 "mibps": 94.81994472200286, 00:22:21.664 "io_failed": 0, 00:22:21.664 "io_timeout": 0, 00:22:21.664 "avg_latency_us": 5266.142226581657, 00:22:21.664 "min_latency_us": 2649.9339130434782, 00:22:21.664 "max_latency_us": 10485.76 00:22:21.664 } 00:22:21.664 ], 00:22:21.664 "core_count": 1 00:22:21.664 } 00:22:21.664 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:22:21.664 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.664 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.664 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.664 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:21.664 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 511889 00:22:21.664 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 511889 ']' 00:22:21.664 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 511889 00:22:21.664 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:21.664 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 511889 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 511889' 00:22:21.665 killing process with pid 511889 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 511889 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 511889 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:21.665 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:21.665 [2024-11-20 12:32:02.431683] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:22:21.665 [2024-11-20 12:32:02.431732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid511889 ] 00:22:21.665 [2024-11-20 12:32:02.507399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.665 [2024-11-20 12:32:02.550187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.665 [2024-11-20 12:32:03.275485] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name d35b2520-12d8-48db-ab24-90005c367765 already exists 00:22:21.665 [2024-11-20 12:32:03.275514] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:d35b2520-12d8-48db-ab24-90005c367765 alias for bdev NVMe1n1 00:22:21.665 [2024-11-20 12:32:03.275522] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:21.665 Running I/O for 1 seconds... 00:22:21.665 24213.00 IOPS, 94.58 MiB/s 00:22:21.665 Latency(us) 00:22:21.665 [2024-11-20T11:32:04.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.665 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:21.665 NVMe0n1 : 1.01 24273.91 94.82 0.00 0.00 5266.14 2649.93 10485.76 00:22:21.665 [2024-11-20T11:32:04.781Z] =================================================================================================================== 00:22:21.665 [2024-11-20T11:32:04.781Z] Total : 24273.91 94.82 0.00 0.00 5266.14 2649.93 10485.76 00:22:21.665 Received shutdown signal, test time was about 1.000000 seconds 00:22:21.665 00:22:21.665 Latency(us) 00:22:21.665 [2024-11-20T11:32:04.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.665 [2024-11-20T11:32:04.781Z] =================================================================================================================== 00:22:21.665 [2024-11-20T11:32:04.781Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:22:21.665 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:21.665 rmmod nvme_tcp 00:22:21.665 rmmod nvme_fabrics 00:22:21.665 rmmod nvme_keyring 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 511855 ']' 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 511855 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 511855 ']' 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 511855 
00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.665 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 511855 00:22:21.925 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:21.925 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:21.925 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 511855' 00:22:21.925 killing process with pid 511855 00:22:21.925 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 511855 00:22:21.925 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 511855 00:22:21.925 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:21.925 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:21.925 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:21.925 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:21.925 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:21.925 12:32:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:21.925 12:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:21.925 12:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:21.925 12:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:22:21.925 12:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.925 12:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.925 12:32:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:24.462 00:22:24.462 real 0m11.296s 00:22:24.462 user 0m12.694s 00:22:24.462 sys 0m5.244s 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:24.462 ************************************ 00:22:24.462 END TEST nvmf_multicontroller 00:22:24.462 ************************************ 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.462 ************************************ 00:22:24.462 START TEST nvmf_aer 00:22:24.462 ************************************ 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:24.462 * Looking for test storage... 
00:22:24.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:24.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.462 --rc genhtml_branch_coverage=1 00:22:24.462 --rc genhtml_function_coverage=1 00:22:24.462 --rc genhtml_legend=1 00:22:24.462 --rc geninfo_all_blocks=1 00:22:24.462 --rc geninfo_unexecuted_blocks=1 00:22:24.462 00:22:24.462 ' 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:24.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.462 --rc 
genhtml_branch_coverage=1 00:22:24.462 --rc genhtml_function_coverage=1 00:22:24.462 --rc genhtml_legend=1 00:22:24.462 --rc geninfo_all_blocks=1 00:22:24.462 --rc geninfo_unexecuted_blocks=1 00:22:24.462 00:22:24.462 ' 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:24.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.462 --rc genhtml_branch_coverage=1 00:22:24.462 --rc genhtml_function_coverage=1 00:22:24.462 --rc genhtml_legend=1 00:22:24.462 --rc geninfo_all_blocks=1 00:22:24.462 --rc geninfo_unexecuted_blocks=1 00:22:24.462 00:22:24.462 ' 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:24.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.462 --rc genhtml_branch_coverage=1 00:22:24.462 --rc genhtml_function_coverage=1 00:22:24.462 --rc genhtml_legend=1 00:22:24.462 --rc geninfo_all_blocks=1 00:22:24.462 --rc geninfo_unexecuted_blocks=1 00:22:24.462 00:22:24.462 ' 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.462 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.463 12:32:07 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:24.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:24.463 12:32:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:31.034 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.034 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:31.035 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.035 12:32:13 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:31.035 Found net devices under 0000:86:00.0: cvl_0_0 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:31.035 Found net devices under 0000:86:00.1: cvl_0_1 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:31.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:31.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:22:31.035 00:22:31.035 --- 10.0.0.2 ping statistics --- 00:22:31.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.035 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:31.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:22:31.035 00:22:31.035 --- 10.0.0.1 ping statistics --- 00:22:31.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.035 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=515876 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 515876 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 515876 ']' 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.035 [2024-11-20 12:32:13.376545] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:22:31.035 [2024-11-20 12:32:13.376592] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.035 [2024-11-20 12:32:13.457139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:31.035 [2024-11-20 12:32:13.498506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:31.035 [2024-11-20 12:32:13.498548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.035 [2024-11-20 12:32:13.498555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.035 [2024-11-20 12:32:13.498561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.035 [2024-11-20 12:32:13.498566] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:31.035 [2024-11-20 12:32:13.500030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.035 [2024-11-20 12:32:13.500141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.035 [2024-11-20 12:32:13.500226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.035 [2024-11-20 12:32:13.500227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.035 [2024-11-20 12:32:13.645977] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.035 Malloc0 00:22:31.035 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.036 [2024-11-20 12:32:13.712077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.036 [ 00:22:31.036 { 00:22:31.036 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:31.036 "subtype": "Discovery", 00:22:31.036 "listen_addresses": [], 00:22:31.036 "allow_any_host": true, 00:22:31.036 "hosts": [] 00:22:31.036 }, 00:22:31.036 { 00:22:31.036 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.036 "subtype": "NVMe", 00:22:31.036 "listen_addresses": [ 00:22:31.036 { 00:22:31.036 "trtype": "TCP", 00:22:31.036 "adrfam": "IPv4", 00:22:31.036 "traddr": "10.0.0.2", 00:22:31.036 "trsvcid": "4420" 00:22:31.036 } 00:22:31.036 ], 00:22:31.036 "allow_any_host": true, 00:22:31.036 "hosts": [], 00:22:31.036 "serial_number": "SPDK00000000000001", 00:22:31.036 "model_number": "SPDK bdev Controller", 00:22:31.036 "max_namespaces": 2, 00:22:31.036 "min_cntlid": 1, 00:22:31.036 "max_cntlid": 65519, 00:22:31.036 "namespaces": [ 00:22:31.036 { 00:22:31.036 "nsid": 1, 00:22:31.036 "bdev_name": "Malloc0", 00:22:31.036 "name": "Malloc0", 00:22:31.036 "nguid": "EE09B994DEA04822B89907C60C7B5896", 00:22:31.036 "uuid": "ee09b994-dea0-4822-b899-07c60c7b5896" 00:22:31.036 } 00:22:31.036 ] 00:22:31.036 } 00:22:31.036 ] 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=515923 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:22:31.036 12:32:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.036 Malloc1 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.036 Asynchronous Event Request test 00:22:31.036 Attaching to 10.0.0.2 00:22:31.036 Attached to 10.0.0.2 00:22:31.036 Registering asynchronous event callbacks... 00:22:31.036 Starting namespace attribute notice tests for all controllers... 00:22:31.036 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:31.036 aer_cb - Changed Namespace 00:22:31.036 Cleaning up... 
00:22:31.036 [ 00:22:31.036 { 00:22:31.036 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:31.036 "subtype": "Discovery", 00:22:31.036 "listen_addresses": [], 00:22:31.036 "allow_any_host": true, 00:22:31.036 "hosts": [] 00:22:31.036 }, 00:22:31.036 { 00:22:31.036 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.036 "subtype": "NVMe", 00:22:31.036 "listen_addresses": [ 00:22:31.036 { 00:22:31.036 "trtype": "TCP", 00:22:31.036 "adrfam": "IPv4", 00:22:31.036 "traddr": "10.0.0.2", 00:22:31.036 "trsvcid": "4420" 00:22:31.036 } 00:22:31.036 ], 00:22:31.036 "allow_any_host": true, 00:22:31.036 "hosts": [], 00:22:31.036 "serial_number": "SPDK00000000000001", 00:22:31.036 "model_number": "SPDK bdev Controller", 00:22:31.036 "max_namespaces": 2, 00:22:31.036 "min_cntlid": 1, 00:22:31.036 "max_cntlid": 65519, 00:22:31.036 "namespaces": [ 00:22:31.036 { 00:22:31.036 "nsid": 1, 00:22:31.036 "bdev_name": "Malloc0", 00:22:31.036 "name": "Malloc0", 00:22:31.036 "nguid": "EE09B994DEA04822B89907C60C7B5896", 00:22:31.036 "uuid": "ee09b994-dea0-4822-b899-07c60c7b5896" 00:22:31.036 }, 00:22:31.036 { 00:22:31.036 "nsid": 2, 00:22:31.036 "bdev_name": "Malloc1", 00:22:31.036 "name": "Malloc1", 00:22:31.036 "nguid": "4B8B52DBB78E4D0EA155D54A30EBC920", 00:22:31.036 "uuid": "4b8b52db-b78e-4d0e-a155-d54a30ebc920" 00:22:31.036 } 00:22:31.036 ] 00:22:31.036 } 00:22:31.036 ] 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 515923 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.036 12:32:14 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:31.036 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.296 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.296 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.296 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:31.296 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:31.297 rmmod nvme_tcp 00:22:31.297 rmmod nvme_fabrics 00:22:31.297 rmmod nvme_keyring 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
515876 ']' 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 515876 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 515876 ']' 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 515876 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 515876 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 515876' 00:22:31.297 killing process with pid 515876 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 515876 00:22:31.297 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 515876 00:22:31.557 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:31.557 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:31.557 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:31.557 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:31.557 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:31.557 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:31.557 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:31.557 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:31.557 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:31.557 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.557 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.557 12:32:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.465 12:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:33.465 00:22:33.465 real 0m9.372s 00:22:33.465 user 0m5.503s 00:22:33.465 sys 0m4.952s 00:22:33.465 12:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:33.465 12:32:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:33.465 ************************************ 00:22:33.465 END TEST nvmf_aer 00:22:33.465 ************************************ 00:22:33.465 12:32:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:33.465 12:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:33.465 12:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:33.465 12:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.725 ************************************ 00:22:33.725 START TEST nvmf_async_init 00:22:33.725 ************************************ 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:33.725 * Looking for test storage... 
00:22:33.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:33.725 12:32:16 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:33.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.725 --rc genhtml_branch_coverage=1 00:22:33.725 --rc genhtml_function_coverage=1 00:22:33.725 --rc genhtml_legend=1 00:22:33.725 --rc geninfo_all_blocks=1 00:22:33.725 --rc geninfo_unexecuted_blocks=1 00:22:33.725 
00:22:33.725 ' 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:33.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.725 --rc genhtml_branch_coverage=1 00:22:33.725 --rc genhtml_function_coverage=1 00:22:33.725 --rc genhtml_legend=1 00:22:33.725 --rc geninfo_all_blocks=1 00:22:33.725 --rc geninfo_unexecuted_blocks=1 00:22:33.725 00:22:33.725 ' 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:33.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.725 --rc genhtml_branch_coverage=1 00:22:33.725 --rc genhtml_function_coverage=1 00:22:33.725 --rc genhtml_legend=1 00:22:33.725 --rc geninfo_all_blocks=1 00:22:33.725 --rc geninfo_unexecuted_blocks=1 00:22:33.725 00:22:33.725 ' 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:33.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.725 --rc genhtml_branch_coverage=1 00:22:33.725 --rc genhtml_function_coverage=1 00:22:33.725 --rc genhtml_legend=1 00:22:33.725 --rc geninfo_all_blocks=1 00:22:33.725 --rc geninfo_unexecuted_blocks=1 00:22:33.725 00:22:33.725 ' 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:22:33.725 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:33.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=eeef2cb1b5f34b8b998d6aff8bc7d430 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:33.726 12:32:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.297 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.297 12:32:22 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:40.298 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:40.298 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:40.298 Found net devices under 0000:86:00.0: cvl_0_0 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:40.298 Found net devices under 0000:86:00.1: cvl_0_1 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:40.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:22:40.298 00:22:40.298 --- 10.0.0.2 ping statistics --- 00:22:40.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.298 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:40.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:22:40.298 00:22:40.298 --- 10.0.0.1 ping statistics --- 00:22:40.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.298 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=519457 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 519457 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 519457 ']' 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.298 12:32:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.298 [2024-11-20 12:32:22.800866] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:22:40.298 [2024-11-20 12:32:22.800919] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.298 [2024-11-20 12:32:22.882373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.298 [2024-11-20 12:32:22.924041] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.298 [2024-11-20 12:32:22.924078] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.298 [2024-11-20 12:32:22.924085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.298 [2024-11-20 12:32:22.924091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.298 [2024-11-20 12:32:22.924096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:40.298 [2024-11-20 12:32:22.924660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.298 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.298 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.299 [2024-11-20 12:32:23.060408] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.299 null0 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g eeef2cb1b5f34b8b998d6aff8bc7d430 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.299 [2024-11-20 12:32:23.104661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.299 nvme0n1 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.299 [ 00:22:40.299 { 00:22:40.299 "name": "nvme0n1", 00:22:40.299 "aliases": [ 00:22:40.299 "eeef2cb1-b5f3-4b8b-998d-6aff8bc7d430" 00:22:40.299 ], 00:22:40.299 "product_name": "NVMe disk", 00:22:40.299 "block_size": 512, 00:22:40.299 "num_blocks": 2097152, 00:22:40.299 "uuid": "eeef2cb1-b5f3-4b8b-998d-6aff8bc7d430", 00:22:40.299 "numa_id": 1, 00:22:40.299 "assigned_rate_limits": { 00:22:40.299 "rw_ios_per_sec": 0, 00:22:40.299 "rw_mbytes_per_sec": 0, 00:22:40.299 "r_mbytes_per_sec": 0, 00:22:40.299 "w_mbytes_per_sec": 0 00:22:40.299 }, 00:22:40.299 "claimed": false, 00:22:40.299 "zoned": false, 00:22:40.299 "supported_io_types": { 00:22:40.299 "read": true, 00:22:40.299 "write": true, 00:22:40.299 "unmap": false, 00:22:40.299 "flush": true, 00:22:40.299 "reset": true, 00:22:40.299 "nvme_admin": true, 00:22:40.299 "nvme_io": true, 00:22:40.299 "nvme_io_md": false, 00:22:40.299 "write_zeroes": true, 00:22:40.299 "zcopy": false, 00:22:40.299 "get_zone_info": false, 00:22:40.299 "zone_management": false, 00:22:40.299 "zone_append": false, 00:22:40.299 "compare": true, 00:22:40.299 "compare_and_write": true, 00:22:40.299 "abort": true, 00:22:40.299 "seek_hole": false, 00:22:40.299 "seek_data": false, 00:22:40.299 "copy": true, 00:22:40.299 
"nvme_iov_md": false 00:22:40.299 }, 00:22:40.299 "memory_domains": [ 00:22:40.299 { 00:22:40.299 "dma_device_id": "system", 00:22:40.299 "dma_device_type": 1 00:22:40.299 } 00:22:40.299 ], 00:22:40.299 "driver_specific": { 00:22:40.299 "nvme": [ 00:22:40.299 { 00:22:40.299 "trid": { 00:22:40.299 "trtype": "TCP", 00:22:40.299 "adrfam": "IPv4", 00:22:40.299 "traddr": "10.0.0.2", 00:22:40.299 "trsvcid": "4420", 00:22:40.299 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:40.299 }, 00:22:40.299 "ctrlr_data": { 00:22:40.299 "cntlid": 1, 00:22:40.299 "vendor_id": "0x8086", 00:22:40.299 "model_number": "SPDK bdev Controller", 00:22:40.299 "serial_number": "00000000000000000000", 00:22:40.299 "firmware_revision": "25.01", 00:22:40.299 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:40.299 "oacs": { 00:22:40.299 "security": 0, 00:22:40.299 "format": 0, 00:22:40.299 "firmware": 0, 00:22:40.299 "ns_manage": 0 00:22:40.299 }, 00:22:40.299 "multi_ctrlr": true, 00:22:40.299 "ana_reporting": false 00:22:40.299 }, 00:22:40.299 "vs": { 00:22:40.299 "nvme_version": "1.3" 00:22:40.299 }, 00:22:40.299 "ns_data": { 00:22:40.299 "id": 1, 00:22:40.299 "can_share": true 00:22:40.299 } 00:22:40.299 } 00:22:40.299 ], 00:22:40.299 "mp_policy": "active_passive" 00:22:40.299 } 00:22:40.299 } 00:22:40.299 ] 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.299 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.299 [2024-11-20 12:32:23.370371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:40.299 [2024-11-20 12:32:23.370429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x23c5220 (9): Bad file descriptor 00:22:40.559 [2024-11-20 12:32:23.502032] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:22:40.559 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.559 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:40.559 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.559 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.559 [ 00:22:40.559 { 00:22:40.559 "name": "nvme0n1", 00:22:40.559 "aliases": [ 00:22:40.559 "eeef2cb1-b5f3-4b8b-998d-6aff8bc7d430" 00:22:40.559 ], 00:22:40.559 "product_name": "NVMe disk", 00:22:40.559 "block_size": 512, 00:22:40.559 "num_blocks": 2097152, 00:22:40.559 "uuid": "eeef2cb1-b5f3-4b8b-998d-6aff8bc7d430", 00:22:40.559 "numa_id": 1, 00:22:40.559 "assigned_rate_limits": { 00:22:40.559 "rw_ios_per_sec": 0, 00:22:40.559 "rw_mbytes_per_sec": 0, 00:22:40.559 "r_mbytes_per_sec": 0, 00:22:40.559 "w_mbytes_per_sec": 0 00:22:40.559 }, 00:22:40.559 "claimed": false, 00:22:40.559 "zoned": false, 00:22:40.559 "supported_io_types": { 00:22:40.559 "read": true, 00:22:40.559 "write": true, 00:22:40.559 "unmap": false, 00:22:40.559 "flush": true, 00:22:40.559 "reset": true, 00:22:40.559 "nvme_admin": true, 00:22:40.559 "nvme_io": true, 00:22:40.559 "nvme_io_md": false, 00:22:40.559 "write_zeroes": true, 00:22:40.559 "zcopy": false, 00:22:40.559 "get_zone_info": false, 00:22:40.559 "zone_management": false, 00:22:40.559 "zone_append": false, 00:22:40.559 "compare": true, 00:22:40.559 "compare_and_write": true, 00:22:40.559 "abort": true, 00:22:40.559 "seek_hole": false, 00:22:40.559 "seek_data": false, 00:22:40.559 "copy": true, 00:22:40.559 "nvme_iov_md": false 00:22:40.559 }, 00:22:40.559 "memory_domains": [ 
00:22:40.559 { 00:22:40.559 "dma_device_id": "system", 00:22:40.559 "dma_device_type": 1 00:22:40.559 } 00:22:40.559 ], 00:22:40.559 "driver_specific": { 00:22:40.559 "nvme": [ 00:22:40.559 { 00:22:40.559 "trid": { 00:22:40.559 "trtype": "TCP", 00:22:40.559 "adrfam": "IPv4", 00:22:40.559 "traddr": "10.0.0.2", 00:22:40.559 "trsvcid": "4420", 00:22:40.559 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:40.559 }, 00:22:40.559 "ctrlr_data": { 00:22:40.559 "cntlid": 2, 00:22:40.559 "vendor_id": "0x8086", 00:22:40.559 "model_number": "SPDK bdev Controller", 00:22:40.559 "serial_number": "00000000000000000000", 00:22:40.559 "firmware_revision": "25.01", 00:22:40.559 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:40.559 "oacs": { 00:22:40.559 "security": 0, 00:22:40.559 "format": 0, 00:22:40.559 "firmware": 0, 00:22:40.559 "ns_manage": 0 00:22:40.559 }, 00:22:40.559 "multi_ctrlr": true, 00:22:40.559 "ana_reporting": false 00:22:40.559 }, 00:22:40.559 "vs": { 00:22:40.559 "nvme_version": "1.3" 00:22:40.559 }, 00:22:40.559 "ns_data": { 00:22:40.559 "id": 1, 00:22:40.559 "can_share": true 00:22:40.559 } 00:22:40.559 } 00:22:40.559 ], 00:22:40.559 "mp_policy": "active_passive" 00:22:40.559 } 00:22:40.559 } 00:22:40.560 ] 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.NN8cjzHM7e 
00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.NN8cjzHM7e 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.NN8cjzHM7e 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.560 [2024-11-20 12:32:23.575005] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:40.560 [2024-11-20 12:32:23.575109] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.560 [2024-11-20 12:32:23.595063] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:40.560 nvme0n1 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.560 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.560 [ 00:22:40.560 { 00:22:40.560 "name": "nvme0n1", 00:22:40.560 "aliases": [ 00:22:40.560 "eeef2cb1-b5f3-4b8b-998d-6aff8bc7d430" 00:22:40.560 ], 00:22:40.560 "product_name": "NVMe disk", 00:22:40.560 "block_size": 512, 00:22:40.560 "num_blocks": 2097152, 00:22:40.560 "uuid": "eeef2cb1-b5f3-4b8b-998d-6aff8bc7d430", 00:22:40.560 "numa_id": 1, 00:22:40.819 "assigned_rate_limits": { 00:22:40.819 "rw_ios_per_sec": 0, 00:22:40.819 
"rw_mbytes_per_sec": 0, 00:22:40.819 "r_mbytes_per_sec": 0, 00:22:40.819 "w_mbytes_per_sec": 0 00:22:40.819 }, 00:22:40.819 "claimed": false, 00:22:40.819 "zoned": false, 00:22:40.819 "supported_io_types": { 00:22:40.826 "read": true, 00:22:40.826 "write": true, 00:22:40.826 "unmap": false, 00:22:40.826 "flush": true, 00:22:40.826 "reset": true, 00:22:40.826 "nvme_admin": true, 00:22:40.826 "nvme_io": true, 00:22:40.826 "nvme_io_md": false, 00:22:40.826 "write_zeroes": true, 00:22:40.826 "zcopy": false, 00:22:40.826 "get_zone_info": false, 00:22:40.826 "zone_management": false, 00:22:40.826 "zone_append": false, 00:22:40.826 "compare": true, 00:22:40.826 "compare_and_write": true, 00:22:40.826 "abort": true, 00:22:40.826 "seek_hole": false, 00:22:40.826 "seek_data": false, 00:22:40.826 "copy": true, 00:22:40.826 "nvme_iov_md": false 00:22:40.826 }, 00:22:40.826 "memory_domains": [ 00:22:40.826 { 00:22:40.826 "dma_device_id": "system", 00:22:40.826 "dma_device_type": 1 00:22:40.826 } 00:22:40.826 ], 00:22:40.826 "driver_specific": { 00:22:40.826 "nvme": [ 00:22:40.826 { 00:22:40.826 "trid": { 00:22:40.826 "trtype": "TCP", 00:22:40.826 "adrfam": "IPv4", 00:22:40.826 "traddr": "10.0.0.2", 00:22:40.826 "trsvcid": "4421", 00:22:40.826 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:40.826 }, 00:22:40.826 "ctrlr_data": { 00:22:40.826 "cntlid": 3, 00:22:40.826 "vendor_id": "0x8086", 00:22:40.826 "model_number": "SPDK bdev Controller", 00:22:40.826 "serial_number": "00000000000000000000", 00:22:40.826 "firmware_revision": "25.01", 00:22:40.826 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:40.826 "oacs": { 00:22:40.826 "security": 0, 00:22:40.826 "format": 0, 00:22:40.826 "firmware": 0, 00:22:40.826 "ns_manage": 0 00:22:40.826 }, 00:22:40.826 "multi_ctrlr": true, 00:22:40.826 "ana_reporting": false 00:22:40.826 }, 00:22:40.826 "vs": { 00:22:40.826 "nvme_version": "1.3" 00:22:40.826 }, 00:22:40.826 "ns_data": { 00:22:40.826 "id": 1, 00:22:40.826 "can_share": true 00:22:40.826 } 
00:22:40.826 } 00:22:40.826 ], 00:22:40.826 "mp_policy": "active_passive" 00:22:40.826 } 00:22:40.826 } 00:22:40.826 ] 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.NN8cjzHM7e 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:40.826 rmmod nvme_tcp 00:22:40.826 rmmod nvme_fabrics 00:22:40.826 rmmod nvme_keyring 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:40.826 12:32:23 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 519457 ']' 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 519457 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 519457 ']' 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 519457 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 519457 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:40.826 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 519457' 00:22:40.826 killing process with pid 519457 00:22:40.827 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 519457 00:22:40.827 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 519457 00:22:41.085 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:41.086 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:41.086 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:41.086 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:41.086 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:41.086 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:41.086 12:32:23 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:41.086 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:41.086 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:41.086 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.086 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.086 12:32:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.990 12:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:42.990 00:22:42.990 real 0m9.442s 00:22:42.990 user 0m3.053s 00:22:42.990 sys 0m4.828s 00:22:42.990 12:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:42.990 12:32:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:42.990 ************************************ 00:22:42.990 END TEST nvmf_async_init 00:22:42.990 ************************************ 00:22:42.990 12:32:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:42.990 12:32:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:42.990 12:32:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:42.990 12:32:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.249 ************************************ 00:22:43.249 START TEST dma 00:22:43.249 ************************************ 00:22:43.249 12:32:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:43.249 * 
Looking for test storage... 00:22:43.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:43.249 12:32:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:43.249 12:32:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:22:43.249 12:32:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:43.249 12:32:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:43.249 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.249 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.249 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.249 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:43.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.250 --rc genhtml_branch_coverage=1 00:22:43.250 --rc genhtml_function_coverage=1 00:22:43.250 --rc genhtml_legend=1 00:22:43.250 --rc geninfo_all_blocks=1 00:22:43.250 --rc geninfo_unexecuted_blocks=1 00:22:43.250 00:22:43.250 ' 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:43.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.250 --rc genhtml_branch_coverage=1 00:22:43.250 --rc genhtml_function_coverage=1 
00:22:43.250 --rc genhtml_legend=1 00:22:43.250 --rc geninfo_all_blocks=1 00:22:43.250 --rc geninfo_unexecuted_blocks=1 00:22:43.250 00:22:43.250 ' 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:43.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.250 --rc genhtml_branch_coverage=1 00:22:43.250 --rc genhtml_function_coverage=1 00:22:43.250 --rc genhtml_legend=1 00:22:43.250 --rc geninfo_all_blocks=1 00:22:43.250 --rc geninfo_unexecuted_blocks=1 00:22:43.250 00:22:43.250 ' 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:43.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.250 --rc genhtml_branch_coverage=1 00:22:43.250 --rc genhtml_function_coverage=1 00:22:43.250 --rc genhtml_legend=1 00:22:43.250 --rc geninfo_all_blocks=1 00:22:43.250 --rc geninfo_unexecuted_blocks=1 00:22:43.250 00:22:43.250 ' 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:43.250 
12:32:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:43.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:43.250 00:22:43.250 real 0m0.209s 00:22:43.250 user 0m0.129s 00:22:43.250 sys 0m0.095s 00:22:43.250 12:32:26 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:43.250 ************************************ 00:22:43.250 END TEST dma 00:22:43.250 ************************************ 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:43.250 12:32:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.510 ************************************ 00:22:43.510 START TEST nvmf_identify 00:22:43.510 ************************************ 00:22:43.510 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:43.510 * Looking for test storage... 
00:22:43.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:43.510 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:43.510 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:22:43.510 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:43.510 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:43.510 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.510 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.510 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.510 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.510 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.510 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.510 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.510 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.510 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.510 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:43.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.511 --rc genhtml_branch_coverage=1 00:22:43.511 --rc genhtml_function_coverage=1 00:22:43.511 --rc genhtml_legend=1 00:22:43.511 --rc geninfo_all_blocks=1 00:22:43.511 --rc geninfo_unexecuted_blocks=1 00:22:43.511 00:22:43.511 ' 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:22:43.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.511 --rc genhtml_branch_coverage=1 00:22:43.511 --rc genhtml_function_coverage=1 00:22:43.511 --rc genhtml_legend=1 00:22:43.511 --rc geninfo_all_blocks=1 00:22:43.511 --rc geninfo_unexecuted_blocks=1 00:22:43.511 00:22:43.511 ' 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:43.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.511 --rc genhtml_branch_coverage=1 00:22:43.511 --rc genhtml_function_coverage=1 00:22:43.511 --rc genhtml_legend=1 00:22:43.511 --rc geninfo_all_blocks=1 00:22:43.511 --rc geninfo_unexecuted_blocks=1 00:22:43.511 00:22:43.511 ' 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:43.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.511 --rc genhtml_branch_coverage=1 00:22:43.511 --rc genhtml_function_coverage=1 00:22:43.511 --rc genhtml_legend=1 00:22:43.511 --rc geninfo_all_blocks=1 00:22:43.511 --rc geninfo_unexecuted_blocks=1 00:22:43.511 00:22:43.511 ' 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.511 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.512 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:43.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:43.512 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:43.512 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:43.512 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:43.512 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:43.512 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:43.512 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:43.512 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:43.512 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.512 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:43.512 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:43.512 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:43.512 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.512 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.512 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.512 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:43.512 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:43.512 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:43.512 12:32:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:50.086 12:32:32 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:50.086 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.086 
12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:50.086 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:50.086 Found net devices under 0000:86:00.0: cvl_0_0 00:22:50.086 12:32:32 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.086 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:50.087 Found net devices under 0000:86:00.1: cvl_0_1 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:50.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:22:50.087 00:22:50.087 --- 10.0.0.2 ping statistics --- 00:22:50.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.087 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:50.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:22:50.087 00:22:50.087 --- 10.0.0.1 ping statistics --- 00:22:50.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.087 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=523275 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 523275 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 523275 ']' 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.087 12:32:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.087 [2024-11-20 12:32:32.609337] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:22:50.087 [2024-11-20 12:32:32.609381] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.087 [2024-11-20 12:32:32.687758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:50.087 [2024-11-20 12:32:32.729121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.087 [2024-11-20 12:32:32.729161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.087 [2024-11-20 12:32:32.729169] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.087 [2024-11-20 12:32:32.729176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.087 [2024-11-20 12:32:32.729181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:50.087 [2024-11-20 12:32:32.730809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.087 [2024-11-20 12:32:32.730919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.087 [2024-11-20 12:32:32.731005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.087 [2024-11-20 12:32:32.731006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.345 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.345 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:50.345 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:50.345 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.345 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.605 [2024-11-20 12:32:33.462722] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.605 Malloc0 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.605 12:32:33 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.605 [2024-11-20 12:32:33.560603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.605 12:32:33 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.605 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.605 [ 00:22:50.605 { 00:22:50.605 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:50.605 "subtype": "Discovery", 00:22:50.605 "listen_addresses": [ 00:22:50.606 { 00:22:50.606 "trtype": "TCP", 00:22:50.606 "adrfam": "IPv4", 00:22:50.606 "traddr": "10.0.0.2", 00:22:50.606 "trsvcid": "4420" 00:22:50.606 } 00:22:50.606 ], 00:22:50.606 "allow_any_host": true, 00:22:50.606 "hosts": [] 00:22:50.606 }, 00:22:50.606 { 00:22:50.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.606 "subtype": "NVMe", 00:22:50.606 "listen_addresses": [ 00:22:50.606 { 00:22:50.606 "trtype": "TCP", 00:22:50.606 "adrfam": "IPv4", 00:22:50.606 "traddr": "10.0.0.2", 00:22:50.606 "trsvcid": "4420" 00:22:50.606 } 00:22:50.606 ], 00:22:50.606 "allow_any_host": true, 00:22:50.606 "hosts": [], 00:22:50.606 "serial_number": "SPDK00000000000001", 00:22:50.606 "model_number": "SPDK bdev Controller", 00:22:50.606 "max_namespaces": 32, 00:22:50.606 "min_cntlid": 1, 00:22:50.606 "max_cntlid": 65519, 00:22:50.606 "namespaces": [ 00:22:50.606 { 00:22:50.606 "nsid": 1, 00:22:50.606 "bdev_name": "Malloc0", 00:22:50.606 "name": "Malloc0", 00:22:50.606 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:50.606 "eui64": "ABCDEF0123456789", 00:22:50.606 "uuid": "e97bf9a4-c0b0-48fa-9341-3fe22be24b53" 00:22:50.606 } 00:22:50.606 ] 00:22:50.606 } 00:22:50.606 ] 00:22:50.606 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.606 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
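Condensed, the host/identify.sh setup traced above amounts to this RPC sequence against the freshly started target. The NQN, serial, nguid/eui64, address, and port are the values from this log; `rpc.py` stands in for the harness's rpc_cmd wrapper and assumes a running nvmf_tgt on the default /var/tmp/spdk.sock:

```shell
# Sketch of host/identify.sh@24-37 as plain rpc.py calls (assumes a live target).
RPC=./scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems                            # prints the JSON shown above
```

Only after nvmf_get_subsystems confirms both the discovery subsystem and cnode1 are listening on 10.0.0.2:4420 does the test run spdk_nvme_identify against the discovery NQN.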
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:50.606 [2024-11-20 12:32:33.614206] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:22:50.606 [2024-11-20 12:32:33.614255] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523524 ] 00:22:50.606 [2024-11-20 12:32:33.654909] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:50.606 [2024-11-20 12:32:33.658962] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:50.606 [2024-11-20 12:32:33.658969] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:50.606 [2024-11-20 12:32:33.658980] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:50.606 [2024-11-20 12:32:33.658990] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:50.606 [2024-11-20 12:32:33.659589] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:50.606 [2024-11-20 12:32:33.659625] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xebe690 0 00:22:50.606 [2024-11-20 12:32:33.673959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:50.606 [2024-11-20 12:32:33.673973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:50.606 [2024-11-20 12:32:33.673977] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:50.606 [2024-11-20 12:32:33.673980] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:50.606 [2024-11-20 12:32:33.674011] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.606 [2024-11-20 12:32:33.674016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.606 [2024-11-20 12:32:33.674020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebe690) 00:22:50.606 [2024-11-20 12:32:33.674032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:50.606 [2024-11-20 12:32:33.674049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20100, cid 0, qid 0 00:22:50.606 [2024-11-20 12:32:33.680955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.606 [2024-11-20 12:32:33.680964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.606 [2024-11-20 12:32:33.680967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.606 [2024-11-20 12:32:33.680971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20100) on tqpair=0xebe690 00:22:50.606 [2024-11-20 12:32:33.680980] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:50.606 [2024-11-20 12:32:33.680986] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:50.606 [2024-11-20 12:32:33.680991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:50.606 [2024-11-20 12:32:33.681004] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.606 [2024-11-20 12:32:33.681008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.606 [2024-11-20 12:32:33.681014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebe690) 
00:22:50.606 [2024-11-20 12:32:33.681021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 12:32:33.681035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20100, cid 0, qid 0 00:22:50.606 [2024-11-20 12:32:33.681203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.606 [2024-11-20 12:32:33.681209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.606 [2024-11-20 12:32:33.681212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.606 [2024-11-20 12:32:33.681216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20100) on tqpair=0xebe690 00:22:50.606 [2024-11-20 12:32:33.681221] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:50.606 [2024-11-20 12:32:33.681227] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:50.606 [2024-11-20 12:32:33.681234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.606 [2024-11-20 12:32:33.681237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.606 [2024-11-20 12:32:33.681241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebe690) 00:22:50.606 [2024-11-20 12:32:33.681247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 12:32:33.681257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20100, cid 0, qid 0 00:22:50.606 [2024-11-20 12:32:33.681320] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.606 [2024-11-20 12:32:33.681326] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:50.606 [2024-11-20 12:32:33.681329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.606 [2024-11-20 12:32:33.681333] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20100) on tqpair=0xebe690 00:22:50.606 [2024-11-20 12:32:33.681338] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:50.606 [2024-11-20 12:32:33.681344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:50.606 [2024-11-20 12:32:33.681350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.606 [2024-11-20 12:32:33.681354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.606 [2024-11-20 12:32:33.681356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebe690) 00:22:50.606 [2024-11-20 12:32:33.681362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 12:32:33.681372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20100, cid 0, qid 0 00:22:50.606 [2024-11-20 12:32:33.681441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.606 [2024-11-20 12:32:33.681447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.606 [2024-11-20 12:32:33.681450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.606 [2024-11-20 12:32:33.681453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20100) on tqpair=0xebe690 00:22:50.606 [2024-11-20 12:32:33.681458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:50.606 [2024-11-20 12:32:33.681466] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.606 [2024-11-20 12:32:33.681470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.606 [2024-11-20 12:32:33.681473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebe690) 00:22:50.606 [2024-11-20 12:32:33.681478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 12:32:33.681490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20100, cid 0, qid 0 00:22:50.606 [2024-11-20 12:32:33.681559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.606 [2024-11-20 12:32:33.681564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.606 [2024-11-20 12:32:33.681568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.606 [2024-11-20 12:32:33.681571] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20100) on tqpair=0xebe690 00:22:50.606 [2024-11-20 12:32:33.681575] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:50.606 [2024-11-20 12:32:33.681579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:50.606 [2024-11-20 12:32:33.681586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:50.606 [2024-11-20 12:32:33.681694] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:50.606 [2024-11-20 12:32:33.681698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:50.606 [2024-11-20 12:32:33.681705] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.606 [2024-11-20 12:32:33.681709] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.606 [2024-11-20 12:32:33.681712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebe690) 00:22:50.606 [2024-11-20 12:32:33.681718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.606 [2024-11-20 12:32:33.681727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20100, cid 0, qid 0 00:22:50.606 [2024-11-20 12:32:33.681808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.606 [2024-11-20 12:32:33.681816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.606 [2024-11-20 12:32:33.681820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.606 [2024-11-20 12:32:33.681824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20100) on tqpair=0xebe690 00:22:50.606 [2024-11-20 12:32:33.681830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:50.607 [2024-11-20 12:32:33.681842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.681846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.681849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebe690) 00:22:50.607 [2024-11-20 12:32:33.681854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 12:32:33.681865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20100, cid 0, qid 0 00:22:50.607 [2024-11-20 
12:32:33.681925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.607 [2024-11-20 12:32:33.681931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.607 [2024-11-20 12:32:33.681934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.681937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20100) on tqpair=0xebe690 00:22:50.607 [2024-11-20 12:32:33.681941] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:50.607 [2024-11-20 12:32:33.681945] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:50.607 [2024-11-20 12:32:33.681958] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:50.607 [2024-11-20 12:32:33.681973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:50.607 [2024-11-20 12:32:33.681982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.681985] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebe690) 00:22:50.607 [2024-11-20 12:32:33.681991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 12:32:33.682002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20100, cid 0, qid 0 00:22:50.607 [2024-11-20 12:32:33.682102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.607 [2024-11-20 12:32:33.682108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:22:50.607 [2024-11-20 12:32:33.682111] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682115] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebe690): datao=0, datal=4096, cccid=0 00:22:50.607 [2024-11-20 12:32:33.682119] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf20100) on tqpair(0xebe690): expected_datao=0, payload_size=4096 00:22:50.607 [2024-11-20 12:32:33.682123] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682129] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682133] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.607 [2024-11-20 12:32:33.682148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.607 [2024-11-20 12:32:33.682151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20100) on tqpair=0xebe690 00:22:50.607 [2024-11-20 12:32:33.682160] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:50.607 [2024-11-20 12:32:33.682165] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:50.607 [2024-11-20 12:32:33.682168] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:50.607 [2024-11-20 12:32:33.682176] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:50.607 [2024-11-20 12:32:33.682180] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:22:50.607 [2024-11-20 12:32:33.682184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:50.607 [2024-11-20 12:32:33.682193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:50.607 [2024-11-20 12:32:33.682199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682203] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebe690) 00:22:50.607 [2024-11-20 12:32:33.682212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:50.607 [2024-11-20 12:32:33.682222] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20100, cid 0, qid 0 00:22:50.607 [2024-11-20 12:32:33.682289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.607 [2024-11-20 12:32:33.682294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.607 [2024-11-20 12:32:33.682297] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682303] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20100) on tqpair=0xebe690 00:22:50.607 [2024-11-20 12:32:33.682309] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebe690) 00:22:50.607 [2024-11-20 12:32:33.682320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.607 [2024-11-20 12:32:33.682326] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xebe690) 00:22:50.607 [2024-11-20 12:32:33.682337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.607 [2024-11-20 12:32:33.682342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682345] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xebe690) 00:22:50.607 [2024-11-20 12:32:33.682353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.607 [2024-11-20 12:32:33.682358] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682362] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebe690) 00:22:50.607 [2024-11-20 12:32:33.682369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.607 [2024-11-20 12:32:33.682374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:50.607 [2024-11-20 12:32:33.682382] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:22:50.607 [2024-11-20 12:32:33.682387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xebe690) 00:22:50.607 [2024-11-20 12:32:33.682396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 12:32:33.682407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20100, cid 0, qid 0 00:22:50.607 [2024-11-20 12:32:33.682411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20280, cid 1, qid 0 00:22:50.607 [2024-11-20 12:32:33.682415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20400, cid 2, qid 0 00:22:50.607 [2024-11-20 12:32:33.682419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20580, cid 3, qid 0 00:22:50.607 [2024-11-20 12:32:33.682423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20700, cid 4, qid 0 00:22:50.607 [2024-11-20 12:32:33.682517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.607 [2024-11-20 12:32:33.682523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.607 [2024-11-20 12:32:33.682525] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20700) on tqpair=0xebe690 00:22:50.607 [2024-11-20 12:32:33.682536] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:50.607 [2024-11-20 12:32:33.682540] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:22:50.607 [2024-11-20 12:32:33.682551] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xebe690) 00:22:50.607 [2024-11-20 12:32:33.682560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.607 [2024-11-20 12:32:33.682570] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20700, cid 4, qid 0 00:22:50.607 [2024-11-20 12:32:33.682640] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.607 [2024-11-20 12:32:33.682646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.607 [2024-11-20 12:32:33.682649] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682652] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebe690): datao=0, datal=4096, cccid=4 00:22:50.607 [2024-11-20 12:32:33.682656] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf20700) on tqpair(0xebe690): expected_datao=0, payload_size=4096 00:22:50.607 [2024-11-20 12:32:33.682660] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682670] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.607 [2024-11-20 12:32:33.682674] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.725955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.872 [2024-11-20 12:32:33.725969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.872 [2024-11-20 12:32:33.725972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.725976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20700) on tqpair=0xebe690 00:22:50.872 [2024-11-20 12:32:33.725990] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:50.872 [2024-11-20 12:32:33.726012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.726016] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xebe690) 00:22:50.872 [2024-11-20 12:32:33.726023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.872 [2024-11-20 12:32:33.726030] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.726033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.726036] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xebe690) 00:22:50.872 [2024-11-20 12:32:33.726041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.872 [2024-11-20 12:32:33.726057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20700, cid 4, qid 0 00:22:50.872 [2024-11-20 12:32:33.726062] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20880, cid 5, qid 0 00:22:50.872 [2024-11-20 12:32:33.726232] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.872 [2024-11-20 12:32:33.726238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.872 [2024-11-20 12:32:33.726241] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.726244] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebe690): datao=0, datal=1024, cccid=4 00:22:50.872 [2024-11-20 12:32:33.726248] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf20700) on tqpair(0xebe690): expected_datao=0, 
payload_size=1024 00:22:50.872 [2024-11-20 12:32:33.726252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.726258] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.726261] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.726266] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.872 [2024-11-20 12:32:33.726271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.872 [2024-11-20 12:32:33.726277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.726280] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20880) on tqpair=0xebe690 00:22:50.872 [2024-11-20 12:32:33.767118] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.872 [2024-11-20 12:32:33.767132] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.872 [2024-11-20 12:32:33.767135] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.767139] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20700) on tqpair=0xebe690 00:22:50.872 [2024-11-20 12:32:33.767151] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.767155] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xebe690) 00:22:50.872 [2024-11-20 12:32:33.767162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.872 [2024-11-20 12:32:33.767178] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20700, cid 4, qid 0 00:22:50.872 [2024-11-20 12:32:33.767344] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.872 [2024-11-20 12:32:33.767350] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.872 [2024-11-20 12:32:33.767353] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.767357] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebe690): datao=0, datal=3072, cccid=4 00:22:50.872 [2024-11-20 12:32:33.767361] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf20700) on tqpair(0xebe690): expected_datao=0, payload_size=3072 00:22:50.872 [2024-11-20 12:32:33.767365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.767371] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.767374] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.767383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.872 [2024-11-20 12:32:33.767388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.872 [2024-11-20 12:32:33.767392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.767395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20700) on tqpair=0xebe690 00:22:50.872 [2024-11-20 12:32:33.767403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.767407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xebe690) 00:22:50.872 [2024-11-20 12:32:33.767413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.872 [2024-11-20 12:32:33.767425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20700, cid 4, qid 0 00:22:50.872 [2024-11-20 12:32:33.767540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.872 [2024-11-20 
12:32:33.767546] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.872 [2024-11-20 12:32:33.767548] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.767552] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebe690): datao=0, datal=8, cccid=4 00:22:50.872 [2024-11-20 12:32:33.767556] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf20700) on tqpair(0xebe690): expected_datao=0, payload_size=8 00:22:50.872 [2024-11-20 12:32:33.767559] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.767565] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.767568] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.811957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.872 [2024-11-20 12:32:33.811967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.872 [2024-11-20 12:32:33.811970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.872 [2024-11-20 12:32:33.811977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20700) on tqpair=0xebe690 00:22:50.872 ===================================================== 00:22:50.872 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:50.872 ===================================================== 00:22:50.872 Controller Capabilities/Features 00:22:50.872 ================================ 00:22:50.872 Vendor ID: 0000 00:22:50.872 Subsystem Vendor ID: 0000 00:22:50.872 Serial Number: .................... 00:22:50.872 Model Number: ........................................ 
00:22:50.872 Firmware Version: 25.01 00:22:50.872 Recommended Arb Burst: 0 00:22:50.872 IEEE OUI Identifier: 00 00 00 00:22:50.872 Multi-path I/O 00:22:50.872 May have multiple subsystem ports: No 00:22:50.872 May have multiple controllers: No 00:22:50.872 Associated with SR-IOV VF: No 00:22:50.872 Max Data Transfer Size: 131072 00:22:50.872 Max Number of Namespaces: 0 00:22:50.872 Max Number of I/O Queues: 1024 00:22:50.872 NVMe Specification Version (VS): 1.3 00:22:50.872 NVMe Specification Version (Identify): 1.3 00:22:50.872 Maximum Queue Entries: 128 00:22:50.872 Contiguous Queues Required: Yes 00:22:50.872 Arbitration Mechanisms Supported 00:22:50.872 Weighted Round Robin: Not Supported 00:22:50.872 Vendor Specific: Not Supported 00:22:50.872 Reset Timeout: 15000 ms 00:22:50.872 Doorbell Stride: 4 bytes 00:22:50.872 NVM Subsystem Reset: Not Supported 00:22:50.872 Command Sets Supported 00:22:50.872 NVM Command Set: Supported 00:22:50.872 Boot Partition: Not Supported 00:22:50.872 Memory Page Size Minimum: 4096 bytes 00:22:50.872 Memory Page Size Maximum: 4096 bytes 00:22:50.872 Persistent Memory Region: Not Supported 00:22:50.872 Optional Asynchronous Events Supported 00:22:50.872 Namespace Attribute Notices: Not Supported 00:22:50.872 Firmware Activation Notices: Not Supported 00:22:50.872 ANA Change Notices: Not Supported 00:22:50.872 PLE Aggregate Log Change Notices: Not Supported 00:22:50.872 LBA Status Info Alert Notices: Not Supported 00:22:50.872 EGE Aggregate Log Change Notices: Not Supported 00:22:50.872 Normal NVM Subsystem Shutdown event: Not Supported 00:22:50.872 Zone Descriptor Change Notices: Not Supported 00:22:50.872 Discovery Log Change Notices: Supported 00:22:50.872 Controller Attributes 00:22:50.872 128-bit Host Identifier: Not Supported 00:22:50.872 Non-Operational Permissive Mode: Not Supported 00:22:50.872 NVM Sets: Not Supported 00:22:50.872 Read Recovery Levels: Not Supported 00:22:50.873 Endurance Groups: Not Supported 00:22:50.873 
Predictable Latency Mode: Not Supported 00:22:50.873 Traffic Based Keep ALive: Not Supported 00:22:50.873 Namespace Granularity: Not Supported 00:22:50.873 SQ Associations: Not Supported 00:22:50.873 UUID List: Not Supported 00:22:50.873 Multi-Domain Subsystem: Not Supported 00:22:50.873 Fixed Capacity Management: Not Supported 00:22:50.873 Variable Capacity Management: Not Supported 00:22:50.873 Delete Endurance Group: Not Supported 00:22:50.873 Delete NVM Set: Not Supported 00:22:50.873 Extended LBA Formats Supported: Not Supported 00:22:50.873 Flexible Data Placement Supported: Not Supported 00:22:50.873 00:22:50.873 Controller Memory Buffer Support 00:22:50.873 ================================ 00:22:50.873 Supported: No 00:22:50.873 00:22:50.873 Persistent Memory Region Support 00:22:50.873 ================================ 00:22:50.873 Supported: No 00:22:50.873 00:22:50.873 Admin Command Set Attributes 00:22:50.873 ============================ 00:22:50.873 Security Send/Receive: Not Supported 00:22:50.873 Format NVM: Not Supported 00:22:50.873 Firmware Activate/Download: Not Supported 00:22:50.873 Namespace Management: Not Supported 00:22:50.873 Device Self-Test: Not Supported 00:22:50.873 Directives: Not Supported 00:22:50.873 NVMe-MI: Not Supported 00:22:50.873 Virtualization Management: Not Supported 00:22:50.873 Doorbell Buffer Config: Not Supported 00:22:50.873 Get LBA Status Capability: Not Supported 00:22:50.873 Command & Feature Lockdown Capability: Not Supported 00:22:50.873 Abort Command Limit: 1 00:22:50.873 Async Event Request Limit: 4 00:22:50.873 Number of Firmware Slots: N/A 00:22:50.873 Firmware Slot 1 Read-Only: N/A 00:22:50.873 Firmware Activation Without Reset: N/A 00:22:50.873 Multiple Update Detection Support: N/A 00:22:50.873 Firmware Update Granularity: No Information Provided 00:22:50.873 Per-Namespace SMART Log: No 00:22:50.873 Asymmetric Namespace Access Log Page: Not Supported 00:22:50.873 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:50.873 Command Effects Log Page: Not Supported 00:22:50.873 Get Log Page Extended Data: Supported 00:22:50.873 Telemetry Log Pages: Not Supported 00:22:50.873 Persistent Event Log Pages: Not Supported 00:22:50.873 Supported Log Pages Log Page: May Support 00:22:50.873 Commands Supported & Effects Log Page: Not Supported 00:22:50.873 Feature Identifiers & Effects Log Page:May Support 00:22:50.873 NVMe-MI Commands & Effects Log Page: May Support 00:22:50.873 Data Area 4 for Telemetry Log: Not Supported 00:22:50.873 Error Log Page Entries Supported: 128 00:22:50.873 Keep Alive: Not Supported 00:22:50.873 00:22:50.873 NVM Command Set Attributes 00:22:50.873 ========================== 00:22:50.873 Submission Queue Entry Size 00:22:50.873 Max: 1 00:22:50.873 Min: 1 00:22:50.873 Completion Queue Entry Size 00:22:50.873 Max: 1 00:22:50.873 Min: 1 00:22:50.873 Number of Namespaces: 0 00:22:50.873 Compare Command: Not Supported 00:22:50.873 Write Uncorrectable Command: Not Supported 00:22:50.873 Dataset Management Command: Not Supported 00:22:50.873 Write Zeroes Command: Not Supported 00:22:50.873 Set Features Save Field: Not Supported 00:22:50.873 Reservations: Not Supported 00:22:50.873 Timestamp: Not Supported 00:22:50.873 Copy: Not Supported 00:22:50.873 Volatile Write Cache: Not Present 00:22:50.873 Atomic Write Unit (Normal): 1 00:22:50.873 Atomic Write Unit (PFail): 1 00:22:50.873 Atomic Compare & Write Unit: 1 00:22:50.873 Fused Compare & Write: Supported 00:22:50.873 Scatter-Gather List 00:22:50.873 SGL Command Set: Supported 00:22:50.873 SGL Keyed: Supported 00:22:50.873 SGL Bit Bucket Descriptor: Not Supported 00:22:50.873 SGL Metadata Pointer: Not Supported 00:22:50.873 Oversized SGL: Not Supported 00:22:50.873 SGL Metadata Address: Not Supported 00:22:50.873 SGL Offset: Supported 00:22:50.873 Transport SGL Data Block: Not Supported 00:22:50.873 Replay Protected Memory Block: Not Supported 00:22:50.873 00:22:50.873 
Firmware Slot Information 00:22:50.873 ========================= 00:22:50.873 Active slot: 0 00:22:50.873 00:22:50.873 00:22:50.873 Error Log 00:22:50.873 ========= 00:22:50.873 00:22:50.873 Active Namespaces 00:22:50.873 ================= 00:22:50.873 Discovery Log Page 00:22:50.873 ================== 00:22:50.873 Generation Counter: 2 00:22:50.873 Number of Records: 2 00:22:50.873 Record Format: 0 00:22:50.873 00:22:50.873 Discovery Log Entry 0 00:22:50.873 ---------------------- 00:22:50.873 Transport Type: 3 (TCP) 00:22:50.873 Address Family: 1 (IPv4) 00:22:50.873 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:50.873 Entry Flags: 00:22:50.873 Duplicate Returned Information: 1 00:22:50.873 Explicit Persistent Connection Support for Discovery: 1 00:22:50.873 Transport Requirements: 00:22:50.873 Secure Channel: Not Required 00:22:50.873 Port ID: 0 (0x0000) 00:22:50.873 Controller ID: 65535 (0xffff) 00:22:50.873 Admin Max SQ Size: 128 00:22:50.873 Transport Service Identifier: 4420 00:22:50.873 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:50.873 Transport Address: 10.0.0.2 00:22:50.873 Discovery Log Entry 1 00:22:50.873 ---------------------- 00:22:50.873 Transport Type: 3 (TCP) 00:22:50.873 Address Family: 1 (IPv4) 00:22:50.873 Subsystem Type: 2 (NVM Subsystem) 00:22:50.873 Entry Flags: 00:22:50.873 Duplicate Returned Information: 0 00:22:50.873 Explicit Persistent Connection Support for Discovery: 0 00:22:50.873 Transport Requirements: 00:22:50.873 Secure Channel: Not Required 00:22:50.873 Port ID: 0 (0x0000) 00:22:50.873 Controller ID: 65535 (0xffff) 00:22:50.873 Admin Max SQ Size: 128 00:22:50.873 Transport Service Identifier: 4420 00:22:50.873 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:50.873 Transport Address: 10.0.0.2 [2024-11-20 12:32:33.812056] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:50.873 [2024-11-20 
12:32:33.812066] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20100) on tqpair=0xebe690 00:22:50.873 [2024-11-20 12:32:33.812072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.873 [2024-11-20 12:32:33.812077] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20280) on tqpair=0xebe690 00:22:50.873 [2024-11-20 12:32:33.812081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.873 [2024-11-20 12:32:33.812086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20400) on tqpair=0xebe690 00:22:50.873 [2024-11-20 12:32:33.812090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.873 [2024-11-20 12:32:33.812094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20580) on tqpair=0xebe690 00:22:50.873 [2024-11-20 12:32:33.812098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.873 [2024-11-20 12:32:33.812107] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.873 [2024-11-20 12:32:33.812111] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.873 [2024-11-20 12:32:33.812114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebe690) 00:22:50.873 [2024-11-20 12:32:33.812121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.873 [2024-11-20 12:32:33.812135] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20580, cid 3, qid 0 00:22:50.873 [2024-11-20 12:32:33.812203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.873 [2024-11-20 
12:32:33.812209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.873 [2024-11-20 12:32:33.812212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.873 [2024-11-20 12:32:33.812215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20580) on tqpair=0xebe690 00:22:50.873 [2024-11-20 12:32:33.812222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.873 [2024-11-20 12:32:33.812225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.873 [2024-11-20 12:32:33.812228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebe690) 00:22:50.873 [2024-11-20 12:32:33.812234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.873 [2024-11-20 12:32:33.812247] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20580, cid 3, qid 0 00:22:50.873 [2024-11-20 12:32:33.812335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.873 [2024-11-20 12:32:33.812341] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.873 [2024-11-20 12:32:33.812344] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.873 [2024-11-20 12:32:33.812347] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20580) on tqpair=0xebe690 00:22:50.873 [2024-11-20 12:32:33.812352] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:50.874 [2024-11-20 12:32:33.812356] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:50.874 [2024-11-20 12:32:33.812365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.874 [2024-11-20 12:32:33.812369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.874 
[2024-11-20 12:32:33.812372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebe690) 00:22:50.874 [2024-11-20 12:32:33.812378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.874 [2024-11-20 12:32:33.812389] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20580, cid 3, qid 0 00:22:50.874 [2024-11-20 12:32:33.812501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.874 [2024-11-20 12:32:33.812507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.874 [2024-11-20 12:32:33.812510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.874 [2024-11-20 12:32:33.812514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20580) on tqpair=0xebe690 00:22:50.874 [2024-11-20 12:32:33.812522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.874 [2024-11-20 12:32:33.812526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.874 [2024-11-20 12:32:33.812529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebe690) 00:22:50.874 [2024-11-20 12:32:33.812534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.874 [2024-11-20 12:32:33.812544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20580, cid 3, qid 0 00:22:50.874 [2024-11-20 12:32:33.812653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.874 [2024-11-20 12:32:33.812658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.874 [2024-11-20 12:32:33.812661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.874 [2024-11-20 12:32:33.812665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20580) on tqpair=0xebe690 
00:22:50.874 [2024-11-20 12:32:33.812673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.874 [2024-11-20 12:32:33.812676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.874 [2024-11-20 12:32:33.812680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebe690) 00:22:50.874 [2024-11-20 12:32:33.812685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.874 [2024-11-20 12:32:33.812694] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20580, cid 3, qid 0 00:22:50.874 [2024-11-20 12:32:33.812806] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.874 [2024-11-20 12:32:33.812811] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.874 [2024-11-20 12:32:33.812814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.874 [2024-11-20 12:32:33.812817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20580) on tqpair=0xebe690 00:22:50.874 [2024-11-20 12:32:33.812825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.874 [2024-11-20 12:32:33.812829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.874 [2024-11-20 12:32:33.812832] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebe690) 00:22:50.874 [2024-11-20 12:32:33.812838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.874 [2024-11-20 12:32:33.812847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20580, cid 3, qid 0 00:22:50.874 [2024-11-20 12:32:33.812913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.874 [2024-11-20 12:32:33.812918] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.874 
00:22:50.875 [2024-11-20 12:32:33.817954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.875 [2024-11-20 12:32:33.817962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.875 [2024-11-20 12:32:33.817965] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.875 [2024-11-20 12:32:33.817971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20580) on tqpair=0xebe690 00:22:50.875 [2024-11-20 12:32:33.817982] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.875 [2024-11-20 12:32:33.817986] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.875 [2024-11-20 12:32:33.817989] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebe690) 00:22:50.875 [2024-11-20 12:32:33.817995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.875 [2024-11-20 12:32:33.818007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf20580, cid 3, qid 0 00:22:50.875 [2024-11-20 12:32:33.818160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.875 [2024-11-20 12:32:33.818165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.875 [2024-11-20 12:32:33.818169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.875 [2024-11-20 12:32:33.818172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf20580) on tqpair=0xebe690 00:22:50.875 [2024-11-20 12:32:33.818178] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async:
*DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:22:50.875 00:22:50.875 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:50.875 [2024-11-20 12:32:33.857395] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:22:50.875 [2024-11-20 12:32:33.857429] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523527 ] 00:22:50.875 [2024-11-20 12:32:33.896579] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:50.875 [2024-11-20 12:32:33.896618] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:50.875 [2024-11-20 12:32:33.896622] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:50.875 [2024-11-20 12:32:33.896633] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:50.875 [2024-11-20 12:32:33.896642] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:50.875 [2024-11-20 12:32:33.900132] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:50.875 [2024-11-20 12:32:33.900162] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x93e690 0 00:22:50.875 [2024-11-20 12:32:33.907958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:50.875 [2024-11-20 12:32:33.907972] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:50.875 
[2024-11-20 12:32:33.907976] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:50.875 [2024-11-20 12:32:33.907979] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:50.875 [2024-11-20 12:32:33.908006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.875 [2024-11-20 12:32:33.908011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.875 [2024-11-20 12:32:33.908014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93e690) 00:22:50.875 [2024-11-20 12:32:33.908024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:50.875 [2024-11-20 12:32:33.908042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0100, cid 0, qid 0 00:22:50.875 [2024-11-20 12:32:33.915956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.875 [2024-11-20 12:32:33.915968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.875 [2024-11-20 12:32:33.915971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.875 [2024-11-20 12:32:33.915975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0100) on tqpair=0x93e690 00:22:50.875 [2024-11-20 12:32:33.915985] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:50.875 [2024-11-20 12:32:33.915991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:50.875 [2024-11-20 12:32:33.915996] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:50.875 [2024-11-20 12:32:33.916007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.875 [2024-11-20 12:32:33.916011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.875 
[2024-11-20 12:32:33.916014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93e690) 00:22:50.875 [2024-11-20 12:32:33.916021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.875 [2024-11-20 12:32:33.916034] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0100, cid 0, qid 0 00:22:50.875 [2024-11-20 12:32:33.916188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.875 [2024-11-20 12:32:33.916194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.875 [2024-11-20 12:32:33.916197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.875 [2024-11-20 12:32:33.916200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0100) on tqpair=0x93e690 00:22:50.875 [2024-11-20 12:32:33.916205] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:50.875 [2024-11-20 12:32:33.916212] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:50.875 [2024-11-20 12:32:33.916218] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.875 [2024-11-20 12:32:33.916222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.875 [2024-11-20 12:32:33.916225] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93e690) 00:22:50.875 [2024-11-20 12:32:33.916231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.875 [2024-11-20 12:32:33.916241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0100, cid 0, qid 0 00:22:50.875 [2024-11-20 12:32:33.916305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.875 
[2024-11-20 12:32:33.916311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.875 [2024-11-20 12:32:33.916314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.916317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0100) on tqpair=0x93e690 00:22:50.876 [2024-11-20 12:32:33.916322] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:50.876 [2024-11-20 12:32:33.916329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:50.876 [2024-11-20 12:32:33.916334] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.916338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.916341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93e690) 00:22:50.876 [2024-11-20 12:32:33.916347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.876 [2024-11-20 12:32:33.916356] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0100, cid 0, qid 0 00:22:50.876 [2024-11-20 12:32:33.916422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.876 [2024-11-20 12:32:33.916428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.876 [2024-11-20 12:32:33.916433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.916437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0100) on tqpair=0x93e690 00:22:50.876 [2024-11-20 12:32:33.916441] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 
00:22:50.876 [2024-11-20 12:32:33.916450] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.916453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.916456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93e690) 00:22:50.876 [2024-11-20 12:32:33.916462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.876 [2024-11-20 12:32:33.916471] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0100, cid 0, qid 0 00:22:50.876 [2024-11-20 12:32:33.916540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.876 [2024-11-20 12:32:33.916546] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.876 [2024-11-20 12:32:33.916549] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.916552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0100) on tqpair=0x93e690 00:22:50.876 [2024-11-20 12:32:33.916556] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:50.876 [2024-11-20 12:32:33.916560] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:50.876 [2024-11-20 12:32:33.916567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:50.876 [2024-11-20 12:32:33.916674] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:50.876 [2024-11-20 12:32:33.916679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:50.876 [2024-11-20 12:32:33.916685] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.916689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.916692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93e690) 00:22:50.876 [2024-11-20 12:32:33.916697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.876 [2024-11-20 12:32:33.916708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0100, cid 0, qid 0 00:22:50.876 [2024-11-20 12:32:33.916769] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.876 [2024-11-20 12:32:33.916774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.876 [2024-11-20 12:32:33.916777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.916780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0100) on tqpair=0x93e690 00:22:50.876 [2024-11-20 12:32:33.916784] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:50.876 [2024-11-20 12:32:33.916792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.916796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.916799] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93e690) 00:22:50.876 [2024-11-20 12:32:33.916804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.876 [2024-11-20 12:32:33.916814] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0100, cid 0, qid 0 00:22:50.876 [2024-11-20 12:32:33.916887] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.876 [2024-11-20 12:32:33.916894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.876 [2024-11-20 12:32:33.916897] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.916901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0100) on tqpair=0x93e690 00:22:50.876 [2024-11-20 12:32:33.916905] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:50.876 [2024-11-20 12:32:33.916909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:50.876 [2024-11-20 12:32:33.916915] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:50.876 [2024-11-20 12:32:33.916927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:50.876 [2024-11-20 12:32:33.916935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.916938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93e690) 00:22:50.876 [2024-11-20 12:32:33.916943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.876 [2024-11-20 12:32:33.916960] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0100, cid 0, qid 0 00:22:50.876 [2024-11-20 12:32:33.917056] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.876 [2024-11-20 12:32:33.917061] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.876 [2024-11-20 12:32:33.917064] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.917068] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x93e690): datao=0, datal=4096, cccid=0 00:22:50.876 [2024-11-20 12:32:33.917072] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9a0100) on tqpair(0x93e690): expected_datao=0, payload_size=4096 00:22:50.876 [2024-11-20 12:32:33.917075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.917082] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.917085] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.917095] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.876 [2024-11-20 12:32:33.917100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.876 [2024-11-20 12:32:33.917103] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.917106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0100) on tqpair=0x93e690 00:22:50.876 [2024-11-20 12:32:33.917113] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:50.876 [2024-11-20 12:32:33.917117] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:50.876 [2024-11-20 12:32:33.917121] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:50.876 [2024-11-20 12:32:33.917126] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:50.876 [2024-11-20 12:32:33.917130] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:50.876 [2024-11-20 12:32:33.917134] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:50.876 [2024-11-20 12:32:33.917143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:50.876 [2024-11-20 12:32:33.917149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.917152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.876 [2024-11-20 12:32:33.917155] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93e690) 00:22:50.876 [2024-11-20 12:32:33.917163] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:50.876 [2024-11-20 12:32:33.917173] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0100, cid 0, qid 0 00:22:50.877 [2024-11-20 12:32:33.917253] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.877 [2024-11-20 12:32:33.917258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.877 [2024-11-20 12:32:33.917261] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917264] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0100) on tqpair=0x93e690 00:22:50.877 [2024-11-20 12:32:33.917269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917276] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x93e690) 00:22:50.877 [2024-11-20 12:32:33.917281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.877 [2024-11-20 
12:32:33.917286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917292] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x93e690) 00:22:50.877 [2024-11-20 12:32:33.917297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.877 [2024-11-20 12:32:33.917302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917305] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917308] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x93e690) 00:22:50.877 [2024-11-20 12:32:33.917313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.877 [2024-11-20 12:32:33.917318] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917321] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x93e690) 00:22:50.877 [2024-11-20 12:32:33.917329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.877 [2024-11-20 12:32:33.917333] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:50.877 [2024-11-20 12:32:33.917340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:50.877 [2024-11-20 12:32:33.917346] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917349] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x93e690) 00:22:50.877 [2024-11-20 12:32:33.917354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.877 [2024-11-20 12:32:33.917366] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0100, cid 0, qid 0 00:22:50.877 [2024-11-20 12:32:33.917370] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0280, cid 1, qid 0 00:22:50.877 [2024-11-20 12:32:33.917374] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0400, cid 2, qid 0 00:22:50.877 [2024-11-20 12:32:33.917378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0580, cid 3, qid 0 00:22:50.877 [2024-11-20 12:32:33.917382] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0700, cid 4, qid 0 00:22:50.877 [2024-11-20 12:32:33.917478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.877 [2024-11-20 12:32:33.917485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.877 [2024-11-20 12:32:33.917488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0700) on tqpair=0x93e690 00:22:50.877 [2024-11-20 12:32:33.917497] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:50.877 [2024-11-20 12:32:33.917502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:50.877 [2024-11-20 12:32:33.917509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to set number of queues (timeout 30000 ms) 00:22:50.877 [2024-11-20 12:32:33.917514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:50.877 [2024-11-20 12:32:33.917519] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917523] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x93e690) 00:22:50.877 [2024-11-20 12:32:33.917531] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:50.877 [2024-11-20 12:32:33.917541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0700, cid 4, qid 0 00:22:50.877 [2024-11-20 12:32:33.917606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.877 [2024-11-20 12:32:33.917612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.877 [2024-11-20 12:32:33.917615] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917618] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0700) on tqpair=0x93e690 00:22:50.877 [2024-11-20 12:32:33.917671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:50.877 [2024-11-20 12:32:33.917680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:50.877 [2024-11-20 12:32:33.917687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x93e690) 00:22:50.877 
[2024-11-20 12:32:33.917695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.877 [2024-11-20 12:32:33.917705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0700, cid 4, qid 0 00:22:50.877 [2024-11-20 12:32:33.917781] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.877 [2024-11-20 12:32:33.917787] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.877 [2024-11-20 12:32:33.917790] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917793] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x93e690): datao=0, datal=4096, cccid=4 00:22:50.877 [2024-11-20 12:32:33.917797] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9a0700) on tqpair(0x93e690): expected_datao=0, payload_size=4096 00:22:50.877 [2024-11-20 12:32:33.917801] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917807] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917810] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917819] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.877 [2024-11-20 12:32:33.917825] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.877 [2024-11-20 12:32:33.917828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917831] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0700) on tqpair=0x93e690 00:22:50.877 [2024-11-20 12:32:33.917840] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:50.877 [2024-11-20 12:32:33.917848] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:50.877 [2024-11-20 12:32:33.917856] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:50.877 [2024-11-20 12:32:33.917862] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x93e690) 00:22:50.877 [2024-11-20 12:32:33.917871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.877 [2024-11-20 12:32:33.917882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0700, cid 4, qid 0 00:22:50.877 [2024-11-20 12:32:33.917964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.877 [2024-11-20 12:32:33.917970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.877 [2024-11-20 12:32:33.917973] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.917976] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x93e690): datao=0, datal=4096, cccid=4 00:22:50.877 [2024-11-20 12:32:33.917980] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9a0700) on tqpair(0x93e690): expected_datao=0, payload_size=4096 00:22:50.877 [2024-11-20 12:32:33.917984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.918000] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.918003] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.918041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.877 [2024-11-20 12:32:33.918047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:22:50.877 [2024-11-20 12:32:33.918050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.918053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0700) on tqpair=0x93e690 00:22:50.877 [2024-11-20 12:32:33.918063] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:50.877 [2024-11-20 12:32:33.918072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:50.877 [2024-11-20 12:32:33.918078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.877 [2024-11-20 12:32:33.918082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x93e690) 00:22:50.877 [2024-11-20 12:32:33.918087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.878 [2024-11-20 12:32:33.918098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0700, cid 4, qid 0 00:22:50.878 [2024-11-20 12:32:33.918170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.878 [2024-11-20 12:32:33.918176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.878 [2024-11-20 12:32:33.918179] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.918182] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x93e690): datao=0, datal=4096, cccid=4 00:22:50.878 [2024-11-20 12:32:33.918186] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9a0700) on tqpair(0x93e690): expected_datao=0, payload_size=4096 00:22:50.878 [2024-11-20 12:32:33.918190] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:22:50.878 [2024-11-20 12:32:33.918200] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.918203] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.918228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.878 [2024-11-20 12:32:33.918235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.878 [2024-11-20 12:32:33.918238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.918242] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0700) on tqpair=0x93e690 00:22:50.878 [2024-11-20 12:32:33.918248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:50.878 [2024-11-20 12:32:33.918255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:50.878 [2024-11-20 12:32:33.918262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:50.878 [2024-11-20 12:32:33.918267] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:50.878 [2024-11-20 12:32:33.918272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:50.878 [2024-11-20 12:32:33.918277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:50.878 [2024-11-20 12:32:33.918281] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:50.878 [2024-11-20 
12:32:33.918285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:50.878 [2024-11-20 12:32:33.918289] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:50.878 [2024-11-20 12:32:33.918301] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.918305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x93e690) 00:22:50.878 [2024-11-20 12:32:33.918310] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.878 [2024-11-20 12:32:33.918316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.918319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.918322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x93e690) 00:22:50.878 [2024-11-20 12:32:33.918327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.878 [2024-11-20 12:32:33.918340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0700, cid 4, qid 0 00:22:50.878 [2024-11-20 12:32:33.918344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0880, cid 5, qid 0 00:22:50.878 [2024-11-20 12:32:33.918434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.878 [2024-11-20 12:32:33.918439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.878 [2024-11-20 12:32:33.918443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.918446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0700) on tqpair=0x93e690 00:22:50.878 
[2024-11-20 12:32:33.918451] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.878 [2024-11-20 12:32:33.918456] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.878 [2024-11-20 12:32:33.918459] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.918462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0880) on tqpair=0x93e690 00:22:50.878 [2024-11-20 12:32:33.918471] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.918474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x93e690) 00:22:50.878 [2024-11-20 12:32:33.918479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.878 [2024-11-20 12:32:33.918491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0880, cid 5, qid 0 00:22:50.878 [2024-11-20 12:32:33.918554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.878 [2024-11-20 12:32:33.918560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.878 [2024-11-20 12:32:33.918563] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.918566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0880) on tqpair=0x93e690 00:22:50.878 [2024-11-20 12:32:33.918574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.918577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x93e690) 00:22:50.878 [2024-11-20 12:32:33.918582] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.878 [2024-11-20 12:32:33.918592] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0880, cid 5, qid 0 00:22:50.878 [2024-11-20 12:32:33.918653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.878 [2024-11-20 12:32:33.918659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.878 [2024-11-20 12:32:33.918662] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.918665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0880) on tqpair=0x93e690 00:22:50.878 [2024-11-20 12:32:33.918672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.918676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x93e690) 00:22:50.878 [2024-11-20 12:32:33.918681] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.878 [2024-11-20 12:32:33.918690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0880, cid 5, qid 0 00:22:50.878 [2024-11-20 12:32:33.918755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.878 [2024-11-20 12:32:33.918760] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.878 [2024-11-20 12:32:33.918763] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.918766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0880) on tqpair=0x93e690 00:22:50.878 [2024-11-20 12:32:33.918778] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.918782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x93e690) 00:22:50.878 [2024-11-20 12:32:33.918787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:50.878 [2024-11-20 12:32:33.918793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.918796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x93e690) 00:22:50.878 [2024-11-20 12:32:33.918802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.878 [2024-11-20 12:32:33.918808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.918811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x93e690) 00:22:50.878 [2024-11-20 12:32:33.918816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.878 [2024-11-20 12:32:33.918822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.918825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x93e690) 00:22:50.878 [2024-11-20 12:32:33.918831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.878 [2024-11-20 12:32:33.918844] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0880, cid 5, qid 0 00:22:50.878 [2024-11-20 12:32:33.918849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0700, cid 4, qid 0 00:22:50.878 [2024-11-20 12:32:33.918853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0a00, cid 6, qid 0 00:22:50.878 [2024-11-20 12:32:33.918857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0b80, cid 7, qid 0 00:22:50.878 [2024-11-20 12:32:33.919001] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.878 [2024-11-20 12:32:33.919007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.878 [2024-11-20 12:32:33.919010] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.919013] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x93e690): datao=0, datal=8192, cccid=5 00:22:50.878 [2024-11-20 12:32:33.919017] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9a0880) on tqpair(0x93e690): expected_datao=0, payload_size=8192 00:22:50.878 [2024-11-20 12:32:33.919020] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.919031] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.919035] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.919042] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.878 [2024-11-20 12:32:33.919047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.878 [2024-11-20 12:32:33.919050] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.919053] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x93e690): datao=0, datal=512, cccid=4 00:22:50.878 [2024-11-20 12:32:33.919057] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9a0700) on tqpair(0x93e690): expected_datao=0, payload_size=512 00:22:50.878 [2024-11-20 12:32:33.919061] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.919066] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.919069] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.878 [2024-11-20 12:32:33.919074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
00:22:50.878 [2024-11-20 12:32:33.919078] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.879 [2024-11-20 12:32:33.919081] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.879 [2024-11-20 12:32:33.919084] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x93e690): datao=0, datal=512, cccid=6 00:22:50.879 [2024-11-20 12:32:33.919088] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9a0a00) on tqpair(0x93e690): expected_datao=0, payload_size=512 00:22:50.879 [2024-11-20 12:32:33.919092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.879 [2024-11-20 12:32:33.919097] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.879 [2024-11-20 12:32:33.919100] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.879 [2024-11-20 12:32:33.919105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:50.879 [2024-11-20 12:32:33.919109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:50.879 [2024-11-20 12:32:33.919112] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:50.879 [2024-11-20 12:32:33.919115] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x93e690): datao=0, datal=4096, cccid=7 00:22:50.879 [2024-11-20 12:32:33.919119] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9a0b80) on tqpair(0x93e690): expected_datao=0, payload_size=4096 00:22:50.879 [2024-11-20 12:32:33.919123] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.879 [2024-11-20 12:32:33.919128] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:50.879 [2024-11-20 12:32:33.919131] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:50.879 [2024-11-20 12:32:33.919138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.879 [2024-11-20 12:32:33.919143] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.879 [2024-11-20 12:32:33.919148] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.879 [2024-11-20 12:32:33.919151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0880) on tqpair=0x93e690 00:22:50.879 [2024-11-20 12:32:33.919160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.879 [2024-11-20 12:32:33.919165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.879 [2024-11-20 12:32:33.919168] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.879 [2024-11-20 12:32:33.919172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0700) on tqpair=0x93e690 00:22:50.879 [2024-11-20 12:32:33.919180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.879 [2024-11-20 12:32:33.919185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.879 [2024-11-20 12:32:33.919188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.879 [2024-11-20 12:32:33.919191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0a00) on tqpair=0x93e690 00:22:50.879 [2024-11-20 12:32:33.919197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.879 [2024-11-20 12:32:33.919202] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.879 [2024-11-20 12:32:33.919205] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.879 [2024-11-20 12:32:33.919208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0b80) on tqpair=0x93e690 00:22:50.879 ===================================================== 00:22:50.879 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:50.879 ===================================================== 00:22:50.879 Controller Capabilities/Features 00:22:50.879 ================================ 
00:22:50.879 Vendor ID: 8086 00:22:50.879 Subsystem Vendor ID: 8086 00:22:50.879 Serial Number: SPDK00000000000001 00:22:50.879 Model Number: SPDK bdev Controller 00:22:50.879 Firmware Version: 25.01 00:22:50.879 Recommended Arb Burst: 6 00:22:50.879 IEEE OUI Identifier: e4 d2 5c 00:22:50.879 Multi-path I/O 00:22:50.879 May have multiple subsystem ports: Yes 00:22:50.879 May have multiple controllers: Yes 00:22:50.879 Associated with SR-IOV VF: No 00:22:50.879 Max Data Transfer Size: 131072 00:22:50.879 Max Number of Namespaces: 32 00:22:50.879 Max Number of I/O Queues: 127 00:22:50.879 NVMe Specification Version (VS): 1.3 00:22:50.879 NVMe Specification Version (Identify): 1.3 00:22:50.879 Maximum Queue Entries: 128 00:22:50.879 Contiguous Queues Required: Yes 00:22:50.879 Arbitration Mechanisms Supported 00:22:50.879 Weighted Round Robin: Not Supported 00:22:50.879 Vendor Specific: Not Supported 00:22:50.879 Reset Timeout: 15000 ms 00:22:50.879 Doorbell Stride: 4 bytes 00:22:50.879 NVM Subsystem Reset: Not Supported 00:22:50.879 Command Sets Supported 00:22:50.879 NVM Command Set: Supported 00:22:50.879 Boot Partition: Not Supported 00:22:50.879 Memory Page Size Minimum: 4096 bytes 00:22:50.879 Memory Page Size Maximum: 4096 bytes 00:22:50.879 Persistent Memory Region: Not Supported 00:22:50.879 Optional Asynchronous Events Supported 00:22:50.879 Namespace Attribute Notices: Supported 00:22:50.879 Firmware Activation Notices: Not Supported 00:22:50.879 ANA Change Notices: Not Supported 00:22:50.879 PLE Aggregate Log Change Notices: Not Supported 00:22:50.879 LBA Status Info Alert Notices: Not Supported 00:22:50.879 EGE Aggregate Log Change Notices: Not Supported 00:22:50.879 Normal NVM Subsystem Shutdown event: Not Supported 00:22:50.879 Zone Descriptor Change Notices: Not Supported 00:22:50.879 Discovery Log Change Notices: Not Supported 00:22:50.879 Controller Attributes 00:22:50.879 128-bit Host Identifier: Supported 00:22:50.879 Non-Operational Permissive 
Mode: Not Supported 00:22:50.879 NVM Sets: Not Supported 00:22:50.879 Read Recovery Levels: Not Supported 00:22:50.879 Endurance Groups: Not Supported 00:22:50.879 Predictable Latency Mode: Not Supported 00:22:50.879 Traffic Based Keep ALive: Not Supported 00:22:50.879 Namespace Granularity: Not Supported 00:22:50.879 SQ Associations: Not Supported 00:22:50.879 UUID List: Not Supported 00:22:50.879 Multi-Domain Subsystem: Not Supported 00:22:50.879 Fixed Capacity Management: Not Supported 00:22:50.879 Variable Capacity Management: Not Supported 00:22:50.879 Delete Endurance Group: Not Supported 00:22:50.879 Delete NVM Set: Not Supported 00:22:50.879 Extended LBA Formats Supported: Not Supported 00:22:50.879 Flexible Data Placement Supported: Not Supported 00:22:50.879 00:22:50.879 Controller Memory Buffer Support 00:22:50.879 ================================ 00:22:50.879 Supported: No 00:22:50.879 00:22:50.879 Persistent Memory Region Support 00:22:50.879 ================================ 00:22:50.879 Supported: No 00:22:50.879 00:22:50.879 Admin Command Set Attributes 00:22:50.879 ============================ 00:22:50.879 Security Send/Receive: Not Supported 00:22:50.879 Format NVM: Not Supported 00:22:50.879 Firmware Activate/Download: Not Supported 00:22:50.879 Namespace Management: Not Supported 00:22:50.879 Device Self-Test: Not Supported 00:22:50.879 Directives: Not Supported 00:22:50.879 NVMe-MI: Not Supported 00:22:50.879 Virtualization Management: Not Supported 00:22:50.879 Doorbell Buffer Config: Not Supported 00:22:50.879 Get LBA Status Capability: Not Supported 00:22:50.879 Command & Feature Lockdown Capability: Not Supported 00:22:50.879 Abort Command Limit: 4 00:22:50.879 Async Event Request Limit: 4 00:22:50.879 Number of Firmware Slots: N/A 00:22:50.879 Firmware Slot 1 Read-Only: N/A 00:22:50.879 Firmware Activation Without Reset: N/A 00:22:50.879 Multiple Update Detection Support: N/A 00:22:50.879 Firmware Update Granularity: No Information Provided 
00:22:50.879 Per-Namespace SMART Log: No 00:22:50.879 Asymmetric Namespace Access Log Page: Not Supported 00:22:50.879 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:50.879 Command Effects Log Page: Supported 00:22:50.879 Get Log Page Extended Data: Supported 00:22:50.879 Telemetry Log Pages: Not Supported 00:22:50.879 Persistent Event Log Pages: Not Supported 00:22:50.879 Supported Log Pages Log Page: May Support 00:22:50.879 Commands Supported & Effects Log Page: Not Supported 00:22:50.879 Feature Identifiers & Effects Log Page:May Support 00:22:50.879 NVMe-MI Commands & Effects Log Page: May Support 00:22:50.879 Data Area 4 for Telemetry Log: Not Supported 00:22:50.879 Error Log Page Entries Supported: 128 00:22:50.879 Keep Alive: Supported 00:22:50.879 Keep Alive Granularity: 10000 ms 00:22:50.879 00:22:50.879 NVM Command Set Attributes 00:22:50.879 ========================== 00:22:50.879 Submission Queue Entry Size 00:22:50.879 Max: 64 00:22:50.879 Min: 64 00:22:50.879 Completion Queue Entry Size 00:22:50.879 Max: 16 00:22:50.879 Min: 16 00:22:50.879 Number of Namespaces: 32 00:22:50.879 Compare Command: Supported 00:22:50.879 Write Uncorrectable Command: Not Supported 00:22:50.879 Dataset Management Command: Supported 00:22:50.879 Write Zeroes Command: Supported 00:22:50.879 Set Features Save Field: Not Supported 00:22:50.879 Reservations: Supported 00:22:50.879 Timestamp: Not Supported 00:22:50.879 Copy: Supported 00:22:50.879 Volatile Write Cache: Present 00:22:50.879 Atomic Write Unit (Normal): 1 00:22:50.879 Atomic Write Unit (PFail): 1 00:22:50.879 Atomic Compare & Write Unit: 1 00:22:50.879 Fused Compare & Write: Supported 00:22:50.879 Scatter-Gather List 00:22:50.879 SGL Command Set: Supported 00:22:50.879 SGL Keyed: Supported 00:22:50.879 SGL Bit Bucket Descriptor: Not Supported 00:22:50.879 SGL Metadata Pointer: Not Supported 00:22:50.879 Oversized SGL: Not Supported 00:22:50.879 SGL Metadata Address: Not Supported 00:22:50.879 SGL Offset: Supported 
00:22:50.879 Transport SGL Data Block: Not Supported 00:22:50.879 Replay Protected Memory Block: Not Supported 00:22:50.879 00:22:50.879 Firmware Slot Information 00:22:50.879 ========================= 00:22:50.879 Active slot: 1 00:22:50.880 Slot 1 Firmware Revision: 25.01 00:22:50.880 00:22:50.880 00:22:50.880 Commands Supported and Effects 00:22:50.880 ============================== 00:22:50.880 Admin Commands 00:22:50.880 -------------- 00:22:50.880 Get Log Page (02h): Supported 00:22:50.880 Identify (06h): Supported 00:22:50.880 Abort (08h): Supported 00:22:50.880 Set Features (09h): Supported 00:22:50.880 Get Features (0Ah): Supported 00:22:50.880 Asynchronous Event Request (0Ch): Supported 00:22:50.880 Keep Alive (18h): Supported 00:22:50.880 I/O Commands 00:22:50.880 ------------ 00:22:50.880 Flush (00h): Supported LBA-Change 00:22:50.880 Write (01h): Supported LBA-Change 00:22:50.880 Read (02h): Supported 00:22:50.880 Compare (05h): Supported 00:22:50.880 Write Zeroes (08h): Supported LBA-Change 00:22:50.880 Dataset Management (09h): Supported LBA-Change 00:22:50.880 Copy (19h): Supported LBA-Change 00:22:50.880 00:22:50.880 Error Log 00:22:50.880 ========= 00:22:50.880 00:22:50.880 Arbitration 00:22:50.880 =========== 00:22:50.880 Arbitration Burst: 1 00:22:50.880 00:22:50.880 Power Management 00:22:50.880 ================ 00:22:50.880 Number of Power States: 1 00:22:50.880 Current Power State: Power State #0 00:22:50.880 Power State #0: 00:22:50.880 Max Power: 0.00 W 00:22:50.880 Non-Operational State: Operational 00:22:50.880 Entry Latency: Not Reported 00:22:50.880 Exit Latency: Not Reported 00:22:50.880 Relative Read Throughput: 0 00:22:50.880 Relative Read Latency: 0 00:22:50.880 Relative Write Throughput: 0 00:22:50.880 Relative Write Latency: 0 00:22:50.880 Idle Power: Not Reported 00:22:50.880 Active Power: Not Reported 00:22:50.880 Non-Operational Permissive Mode: Not Supported 00:22:50.880 00:22:50.880 Health Information 00:22:50.880 
================== 00:22:50.880 Critical Warnings: 00:22:50.880 Available Spare Space: OK 00:22:50.880 Temperature: OK 00:22:50.880 Device Reliability: OK 00:22:50.880 Read Only: No 00:22:50.880 Volatile Memory Backup: OK 00:22:50.880 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:50.880 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:50.880 Available Spare: 0% 00:22:50.880 Available Spare Threshold: 0% 00:22:50.880 Life Percentage Used:[2024-11-20 12:32:33.919287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.919292] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x93e690) 00:22:50.880 [2024-11-20 12:32:33.919298] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.880 [2024-11-20 12:32:33.919310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0b80, cid 7, qid 0 00:22:50.880 [2024-11-20 12:32:33.919384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.880 [2024-11-20 12:32:33.919390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.880 [2024-11-20 12:32:33.919393] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.919396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0b80) on tqpair=0x93e690 00:22:50.880 [2024-11-20 12:32:33.919422] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:50.880 [2024-11-20 12:32:33.919430] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0100) on tqpair=0x93e690 00:22:50.880 [2024-11-20 12:32:33.919436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.880 [2024-11-20 12:32:33.919440] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0280) on tqpair=0x93e690 00:22:50.880 [2024-11-20 12:32:33.919444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.880 [2024-11-20 12:32:33.919448] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0400) on tqpair=0x93e690 00:22:50.880 [2024-11-20 12:32:33.919452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.880 [2024-11-20 12:32:33.919456] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0580) on tqpair=0x93e690 00:22:50.880 [2024-11-20 12:32:33.919460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.880 [2024-11-20 12:32:33.919467] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.919470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.919473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x93e690) 00:22:50.880 [2024-11-20 12:32:33.919479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.880 [2024-11-20 12:32:33.919493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0580, cid 3, qid 0 00:22:50.880 [2024-11-20 12:32:33.919555] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.880 [2024-11-20 12:32:33.919561] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.880 [2024-11-20 12:32:33.919564] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.919567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0580) on tqpair=0x93e690 
00:22:50.880 [2024-11-20 12:32:33.919573] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.919576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.919579] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x93e690) 00:22:50.880 [2024-11-20 12:32:33.919584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.880 [2024-11-20 12:32:33.919596] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0580, cid 3, qid 0 00:22:50.880 [2024-11-20 12:32:33.919670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.880 [2024-11-20 12:32:33.919676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.880 [2024-11-20 12:32:33.919679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.919682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0580) on tqpair=0x93e690 00:22:50.880 [2024-11-20 12:32:33.919686] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:50.880 [2024-11-20 12:32:33.919690] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:50.880 [2024-11-20 12:32:33.919698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.919702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.919705] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x93e690) 00:22:50.880 [2024-11-20 12:32:33.919710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.880 [2024-11-20 12:32:33.919719] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0580, cid 3, qid 0 00:22:50.880 [2024-11-20 12:32:33.919787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.880 [2024-11-20 12:32:33.919793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.880 [2024-11-20 12:32:33.919795] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.919799] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0580) on tqpair=0x93e690 00:22:50.880 [2024-11-20 12:32:33.919807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.919811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.919814] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x93e690) 00:22:50.880 [2024-11-20 12:32:33.919819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.880 [2024-11-20 12:32:33.919829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0580, cid 3, qid 0 00:22:50.880 [2024-11-20 12:32:33.919885] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.880 [2024-11-20 12:32:33.919891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.880 [2024-11-20 12:32:33.919894] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.919897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0580) on tqpair=0x93e690 00:22:50.880 [2024-11-20 12:32:33.919905] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.919908] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.919911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x93e690) 
00:22:50.880 [2024-11-20 12:32:33.919918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.880 [2024-11-20 12:32:33.919929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0580, cid 3, qid 0 00:22:50.880 [2024-11-20 12:32:33.923955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.880 [2024-11-20 12:32:33.923963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.880 [2024-11-20 12:32:33.923966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.923969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0580) on tqpair=0x93e690 00:22:50.880 [2024-11-20 12:32:33.923979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.923983] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.923986] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x93e690) 00:22:50.880 [2024-11-20 12:32:33.923992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.880 [2024-11-20 12:32:33.924003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9a0580, cid 3, qid 0 00:22:50.880 [2024-11-20 12:32:33.924155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:50.880 [2024-11-20 12:32:33.924161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:50.880 [2024-11-20 12:32:33.924164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:50.880 [2024-11-20 12:32:33.924167] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9a0580) on tqpair=0x93e690 00:22:50.881 [2024-11-20 12:32:33.924174] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:22:50.881 0% 00:22:50.881 Data Units Read: 0 00:22:50.881 Data Units Written: 0 00:22:50.881 Host Read Commands: 0 00:22:50.881 Host Write Commands: 0 00:22:50.881 Controller Busy Time: 0 minutes 00:22:50.881 Power Cycles: 0 00:22:50.881 Power On Hours: 0 hours 00:22:50.881 Unsafe Shutdowns: 0 00:22:50.881 Unrecoverable Media Errors: 0 00:22:50.881 Lifetime Error Log Entries: 0 00:22:50.881 Warning Temperature Time: 0 minutes 00:22:50.881 Critical Temperature Time: 0 minutes 00:22:50.881 00:22:50.881 Number of Queues 00:22:50.881 ================ 00:22:50.881 Number of I/O Submission Queues: 127 00:22:50.881 Number of I/O Completion Queues: 127 00:22:50.881 00:22:50.881 Active Namespaces 00:22:50.881 ================= 00:22:50.881 Namespace ID:1 00:22:50.881 Error Recovery Timeout: Unlimited 00:22:50.881 Command Set Identifier: NVM (00h) 00:22:50.881 Deallocate: Supported 00:22:50.881 Deallocated/Unwritten Error: Not Supported 00:22:50.881 Deallocated Read Value: Unknown 00:22:50.881 Deallocate in Write Zeroes: Not Supported 00:22:50.881 Deallocated Guard Field: 0xFFFF 00:22:50.881 Flush: Supported 00:22:50.881 Reservation: Supported 00:22:50.881 Namespace Sharing Capabilities: Multiple Controllers 00:22:50.881 Size (in LBAs): 131072 (0GiB) 00:22:50.881 Capacity (in LBAs): 131072 (0GiB) 00:22:50.881 Utilization (in LBAs): 131072 (0GiB) 00:22:50.881 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:50.881 EUI64: ABCDEF0123456789 00:22:50.881 UUID: e97bf9a4-c0b0-48fa-9341-3fe22be24b53 00:22:50.881 Thin Provisioning: Not Supported 00:22:50.881 Per-NS Atomic Units: Yes 00:22:50.881 Atomic Boundary Size (Normal): 0 00:22:50.881 Atomic Boundary Size (PFail): 0 00:22:50.881 Atomic Boundary Offset: 0 00:22:50.881 Maximum Single Source Range Length: 65535 00:22:50.881 Maximum Copy Length: 65535 00:22:50.881 Maximum Source Range Count: 1 00:22:50.881 NGUID/EUI64 Never Reused: No 00:22:50.881 Namespace Write 
Protected: No 00:22:50.881 Number of LBA Formats: 1 00:22:50.881 Current LBA Format: LBA Format #00 00:22:50.881 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:50.881 00:22:50.881 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:50.881 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:50.881 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.881 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.881 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.881 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:50.881 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:50.881 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:50.881 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:50.881 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:50.881 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:50.881 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:50.881 12:32:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:50.881 rmmod nvme_tcp 00:22:50.881 rmmod nvme_fabrics 00:22:51.139 rmmod nvme_keyring 00:22:51.139 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:51.139 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:51.139 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:51.139 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 523275 ']' 00:22:51.139 
12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 523275 00:22:51.139 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 523275 ']' 00:22:51.139 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 523275 00:22:51.139 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:51.139 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:51.139 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 523275 00:22:51.139 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:51.139 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:51.140 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 523275' 00:22:51.140 killing process with pid 523275 00:22:51.140 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 523275 00:22:51.140 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 523275 00:22:51.398 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:51.398 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:51.398 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:51.398 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:51.398 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:51.398 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:51.398 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:51.398 12:32:34 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:51.398 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:51.398 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.398 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.398 12:32:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.304 12:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:53.304 00:22:53.304 real 0m9.946s 00:22:53.304 user 0m8.017s 00:22:53.304 sys 0m4.901s 00:22:53.304 12:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:53.304 12:32:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.304 ************************************ 00:22:53.304 END TEST nvmf_identify 00:22:53.304 ************************************ 00:22:53.304 12:32:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:53.304 12:32:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:53.304 12:32:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:53.304 12:32:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.304 ************************************ 00:22:53.304 START TEST nvmf_perf 00:22:53.304 ************************************ 00:22:53.304 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:53.566 * Looking for test storage... 
00:22:53.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:53.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.566 --rc genhtml_branch_coverage=1 00:22:53.566 --rc genhtml_function_coverage=1 00:22:53.566 --rc genhtml_legend=1 00:22:53.566 --rc geninfo_all_blocks=1 00:22:53.566 --rc geninfo_unexecuted_blocks=1 00:22:53.566 00:22:53.566 ' 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:53.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:22:53.566 --rc genhtml_branch_coverage=1 00:22:53.566 --rc genhtml_function_coverage=1 00:22:53.566 --rc genhtml_legend=1 00:22:53.566 --rc geninfo_all_blocks=1 00:22:53.566 --rc geninfo_unexecuted_blocks=1 00:22:53.566 00:22:53.566 ' 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:53.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.566 --rc genhtml_branch_coverage=1 00:22:53.566 --rc genhtml_function_coverage=1 00:22:53.566 --rc genhtml_legend=1 00:22:53.566 --rc geninfo_all_blocks=1 00:22:53.566 --rc geninfo_unexecuted_blocks=1 00:22:53.566 00:22:53.566 ' 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:53.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.566 --rc genhtml_branch_coverage=1 00:22:53.566 --rc genhtml_function_coverage=1 00:22:53.566 --rc genhtml_legend=1 00:22:53.566 --rc geninfo_all_blocks=1 00:22:53.566 --rc geninfo_unexecuted_blocks=1 00:22:53.566 00:22:53.566 ' 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.566 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:53.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:53.567 12:32:36 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:53.567 12:32:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:00.143 12:32:42 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.143 
12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:00.143 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:00.143 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:00.143 Found net devices under 0000:86:00.0: cvl_0_0 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:00.143 12:32:42 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.143 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:00.143 Found net devices under 0000:86:00.1: cvl_0_1 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:00.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:23:00.144 00:23:00.144 --- 10.0.0.2 ping statistics --- 00:23:00.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.144 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:23:00.144 00:23:00.144 --- 10.0.0.1 ping statistics --- 00:23:00.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.144 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=527054 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 527054 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 527054 ']' 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:00.144 [2024-11-20 12:32:42.669439] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:23:00.144 [2024-11-20 12:32:42.669487] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.144 [2024-11-20 12:32:42.749390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:00.144 [2024-11-20 12:32:42.792165] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.144 [2024-11-20 12:32:42.792203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.144 [2024-11-20 12:32:42.792210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.144 [2024-11-20 12:32:42.792216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.144 [2024-11-20 12:32:42.792221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:00.144 [2024-11-20 12:32:42.793647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.144 [2024-11-20 12:32:42.793739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.144 [2024-11-20 12:32:42.793844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.144 [2024-11-20 12:32:42.793845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:00.144 12:32:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:03.435 12:32:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:03.435 12:32:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:03.435 12:32:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:03.435 12:32:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:03.435 12:32:46 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:03.435 12:32:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:03.435 12:32:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:03.435 12:32:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:03.435 12:32:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:03.694 [2024-11-20 12:32:46.575340] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.694 12:32:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:03.953 12:32:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:03.953 12:32:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:03.953 12:32:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:03.953 12:32:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:04.211 12:32:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:04.471 [2024-11-20 12:32:47.382305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.471 12:32:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:23:04.730 12:32:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:04.730 12:32:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:04.730 12:32:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:04.730 12:32:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:06.109 Initializing NVMe Controllers 00:23:06.109 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:06.109 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:06.109 Initialization complete. Launching workers. 00:23:06.109 ======================================================== 00:23:06.109 Latency(us) 00:23:06.109 Device Information : IOPS MiB/s Average min max 00:23:06.109 PCIE (0000:5e:00.0) NSID 1 from core 0: 97182.87 379.62 328.86 34.37 5199.44 00:23:06.109 ======================================================== 00:23:06.109 Total : 97182.87 379.62 328.86 34.37 5199.44 00:23:06.109 00:23:06.109 12:32:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:07.042 Initializing NVMe Controllers 00:23:07.042 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:07.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:07.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:07.042 Initialization complete. Launching workers. 
00:23:07.042 ======================================================== 00:23:07.042 Latency(us) 00:23:07.042 Device Information : IOPS MiB/s Average min max 00:23:07.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 54.00 0.21 18883.39 105.62 45693.86 00:23:07.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16454.48 5007.63 50850.69 00:23:07.043 ======================================================== 00:23:07.043 Total : 115.00 0.45 17595.01 105.62 50850.69 00:23:07.043 00:23:07.043 12:32:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:08.421 Initializing NVMe Controllers 00:23:08.421 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:08.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:08.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:08.421 Initialization complete. Launching workers. 
00:23:08.421 ======================================================== 00:23:08.421 Latency(us) 00:23:08.421 Device Information : IOPS MiB/s Average min max 00:23:08.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10838.65 42.34 2953.49 497.30 10120.27 00:23:08.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3808.50 14.88 8425.03 7030.69 15823.72 00:23:08.421 ======================================================== 00:23:08.421 Total : 14647.15 57.22 4376.18 497.30 15823.72 00:23:08.421 00:23:08.421 12:32:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:08.421 12:32:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:08.421 12:32:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:11.091 Initializing NVMe Controllers 00:23:11.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:11.091 Controller IO queue size 128, less than required. 00:23:11.091 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.091 Controller IO queue size 128, less than required. 00:23:11.091 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:11.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:11.091 Initialization complete. Launching workers. 
00:23:11.091 ======================================================== 00:23:11.091 Latency(us) 00:23:11.091 Device Information : IOPS MiB/s Average min max 00:23:11.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1789.47 447.37 72663.26 46313.98 127166.25 00:23:11.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 621.30 155.32 216252.77 102755.60 319954.51 00:23:11.091 ======================================================== 00:23:11.091 Total : 2410.77 602.69 109668.72 46313.98 319954.51 00:23:11.091 00:23:11.091 12:32:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:11.091 No valid NVMe controllers or AIO or URING devices found 00:23:11.091 Initializing NVMe Controllers 00:23:11.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:11.092 Controller IO queue size 128, less than required. 00:23:11.092 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.092 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:11.092 Controller IO queue size 128, less than required. 00:23:11.092 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.092 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:11.092 WARNING: Some requested NVMe devices were skipped 00:23:11.092 12:32:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:13.625 Initializing NVMe Controllers 00:23:13.625 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:13.625 Controller IO queue size 128, less than required. 00:23:13.625 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:13.625 Controller IO queue size 128, less than required. 00:23:13.625 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:13.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:13.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:13.625 Initialization complete. Launching workers. 
00:23:13.625 00:23:13.625 ==================== 00:23:13.625 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:13.625 TCP transport: 00:23:13.625 polls: 15053 00:23:13.625 idle_polls: 11889 00:23:13.625 sock_completions: 3164 00:23:13.625 nvme_completions: 6145 00:23:13.625 submitted_requests: 9332 00:23:13.625 queued_requests: 1 00:23:13.625 00:23:13.625 ==================== 00:23:13.625 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:13.625 TCP transport: 00:23:13.625 polls: 15172 00:23:13.625 idle_polls: 11334 00:23:13.625 sock_completions: 3838 00:23:13.625 nvme_completions: 6603 00:23:13.625 submitted_requests: 9924 00:23:13.625 queued_requests: 1 00:23:13.625 ======================================================== 00:23:13.625 Latency(us) 00:23:13.625 Device Information : IOPS MiB/s Average min max 00:23:13.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1534.30 383.57 85707.55 61815.40 128487.70 00:23:13.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1648.67 412.17 78200.22 41489.22 121768.50 00:23:13.625 ======================================================== 00:23:13.625 Total : 3182.97 795.74 81819.01 41489.22 128487.70 00:23:13.625 00:23:13.625 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:13.625 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:13.884 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:13.884 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:13.884 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:13.884 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:13.884 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:23:13.884 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:13.884 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:13.884 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:13.884 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:13.884 rmmod nvme_tcp 00:23:13.884 rmmod nvme_fabrics 00:23:13.884 rmmod nvme_keyring 00:23:13.884 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:13.884 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:13.884 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:13.884 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 527054 ']' 00:23:13.884 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 527054 00:23:13.884 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 527054 ']' 00:23:13.884 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 527054 00:23:13.884 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:13.884 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.884 12:32:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527054 00:23:14.143 12:32:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:14.143 12:32:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:14.143 12:32:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527054' 00:23:14.143 killing process with pid 527054 00:23:14.143 12:32:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # 
kill 527054 00:23:14.143 12:32:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 527054 00:23:15.520 12:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:15.520 12:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:15.520 12:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:15.520 12:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:15.520 12:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:15.520 12:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:15.520 12:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:15.520 12:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:15.520 12:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:15.520 12:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.520 12:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.520 12:32:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.425 12:33:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:17.425 00:23:17.425 real 0m24.118s 00:23:17.425 user 1m2.298s 00:23:17.425 sys 0m8.443s 00:23:17.425 12:33:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:17.425 12:33:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:17.425 ************************************ 00:23:17.425 END TEST nvmf_perf 00:23:17.425 ************************************ 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.684 ************************************ 00:23:17.684 START TEST nvmf_fio_host 00:23:17.684 ************************************ 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:17.684 * Looking for test storage... 00:23:17.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.684 12:33:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:17.684 12:33:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:17.684 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:17.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.684 --rc genhtml_branch_coverage=1 00:23:17.684 --rc genhtml_function_coverage=1 00:23:17.684 --rc genhtml_legend=1 00:23:17.685 --rc geninfo_all_blocks=1 00:23:17.685 --rc geninfo_unexecuted_blocks=1 00:23:17.685 00:23:17.685 ' 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:17.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.685 --rc genhtml_branch_coverage=1 00:23:17.685 --rc genhtml_function_coverage=1 00:23:17.685 --rc genhtml_legend=1 00:23:17.685 --rc geninfo_all_blocks=1 00:23:17.685 --rc geninfo_unexecuted_blocks=1 00:23:17.685 00:23:17.685 ' 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:17.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.685 --rc genhtml_branch_coverage=1 00:23:17.685 --rc genhtml_function_coverage=1 00:23:17.685 --rc genhtml_legend=1 00:23:17.685 --rc geninfo_all_blocks=1 00:23:17.685 --rc geninfo_unexecuted_blocks=1 00:23:17.685 00:23:17.685 ' 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:17.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.685 --rc genhtml_branch_coverage=1 00:23:17.685 --rc genhtml_function_coverage=1 00:23:17.685 --rc genhtml_legend=1 00:23:17.685 --rc geninfo_all_blocks=1 00:23:17.685 --rc geninfo_unexecuted_blocks=1 00:23:17.685 00:23:17.685 ' 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.685 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:17.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:17.945 12:33:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:17.945 12:33:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.516 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.516 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:24.516 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:24.516 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:24.516 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:24.516 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:24.516 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:24.516 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:24.516 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:24.516 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:24.516 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:24.516 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:24.516 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:24.516 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:23:24.517 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:24.517 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.517 12:33:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:24.517 Found net devices under 0000:86:00.0: cvl_0_0 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:24.517 Found net devices under 0000:86:00.1: cvl_0_1 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.517 12:33:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:24.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:23:24.517 00:23:24.517 --- 10.0.0.2 ping statistics --- 00:23:24.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.517 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:24.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:23:24.517 00:23:24.517 --- 10.0.0.1 ping statistics --- 00:23:24.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.517 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=533278 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 533278 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 533278 ']' 00:23:24.517 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.518 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.518 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.518 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.518 12:33:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.518 [2024-11-20 12:33:06.833291] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:23:24.518 [2024-11-20 12:33:06.833337] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.518 [2024-11-20 12:33:06.914423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:24.518 [2024-11-20 12:33:06.957514] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.518 [2024-11-20 12:33:06.957551] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:24.518 [2024-11-20 12:33:06.957558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.518 [2024-11-20 12:33:06.957565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.518 [2024-11-20 12:33:06.957570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.518 [2024-11-20 12:33:06.959097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.518 [2024-11-20 12:33:06.959204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.518 [2024-11-20 12:33:06.959310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.518 [2024-11-20 12:33:06.959311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:24.518 12:33:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.518 12:33:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:24.518 12:33:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:24.518 [2024-11-20 12:33:07.224281] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.518 12:33:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:24.518 12:33:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:24.518 12:33:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.518 12:33:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:24.518 Malloc1 00:23:24.518 12:33:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:24.776 12:33:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:25.035 12:33:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:25.035 [2024-11-20 12:33:08.104257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.035 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:25.293 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:25.294 12:33:08 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:25.294 12:33:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:25.551 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:25.551 fio-3.35 00:23:25.551 Starting 1 thread 00:23:28.084 00:23:28.084 test: (groupid=0, jobs=1): err= 0: pid=533941: Wed Nov 20 12:33:11 2024 00:23:28.084 read: IOPS=11.6k, BW=45.3MiB/s (47.5MB/s)(90.9MiB/2005msec) 00:23:28.084 slat (nsec): min=1591, max=256027, avg=1746.62, stdev=2277.98 00:23:28.084 clat (usec): min=3164, max=10838, avg=6113.62, stdev=502.82 00:23:28.084 lat (usec): min=3197, max=10840, avg=6115.37, stdev=502.75 00:23:28.084 clat percentiles (usec): 00:23:28.084 | 1.00th=[ 4883], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:23:28.084 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6259], 00:23:28.084 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6849], 00:23:28.084 | 99.00th=[ 7504], 99.50th=[ 7898], 99.90th=[ 8717], 99.95th=[ 9634], 00:23:28.084 | 99.99th=[10814] 00:23:28.084 bw ( KiB/s): min=45816, max=47024, per=99.91%, avg=46386.00, stdev=496.27, samples=4 00:23:28.084 iops : min=11454, max=11756, avg=11596.50, stdev=124.07, samples=4 00:23:28.084 write: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(90.3MiB/2005msec); 0 zone resets 00:23:28.084 slat (nsec): min=1639, max=224681, avg=1815.24, stdev=1672.23 00:23:28.084 clat (usec): min=2431, max=9495, avg=4929.87, stdev=408.11 00:23:28.084 lat (usec): min=2446, max=9497, avg=4931.68, stdev=408.14 00:23:28.084 clat percentiles (usec): 00:23:28.084 | 1.00th=[ 3982], 5.00th=[ 4293], 10.00th=[ 4490], 20.00th=[ 4621], 00:23:28.084 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4948], 60.00th=[ 5014], 
00:23:28.084 | 70.00th=[ 5145], 80.00th=[ 5211], 90.00th=[ 5407], 95.00th=[ 5538], 00:23:28.084 | 99.00th=[ 6063], 99.50th=[ 6587], 99.90th=[ 7177], 99.95th=[ 8160], 00:23:28.084 | 99.99th=[ 9372] 00:23:28.084 bw ( KiB/s): min=45888, max=46400, per=100.00%, avg=46116.00, stdev=267.21, samples=4 00:23:28.084 iops : min=11472, max=11600, avg=11529.00, stdev=66.80, samples=4 00:23:28.084 lat (msec) : 4=0.57%, 10=99.40%, 20=0.02% 00:23:28.084 cpu : usr=74.55%, sys=24.35%, ctx=102, majf=0, minf=3 00:23:28.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:28.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:28.084 issued rwts: total=23272,23106,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:28.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:28.084 00:23:28.084 Run status group 0 (all jobs): 00:23:28.084 READ: bw=45.3MiB/s (47.5MB/s), 45.3MiB/s-45.3MiB/s (47.5MB/s-47.5MB/s), io=90.9MiB (95.3MB), run=2005-2005msec 00:23:28.084 WRITE: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=90.3MiB (94.6MB), run=2005-2005msec 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:28.084 12:33:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:28.343 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:28.343 fio-3.35 00:23:28.343 Starting 1 thread 00:23:30.878 00:23:30.878 test: (groupid=0, jobs=1): err= 0: pid=534676: Wed Nov 20 12:33:13 2024 00:23:30.878 read: IOPS=10.7k, BW=167MiB/s (175MB/s)(335MiB/2007msec) 00:23:30.878 slat (nsec): min=2525, max=88448, avg=2825.27, stdev=1306.24 00:23:30.878 clat (usec): min=1847, max=50874, avg=7031.08, stdev=3523.00 00:23:30.878 lat (usec): min=1849, max=50876, avg=7033.91, stdev=3523.08 00:23:30.878 clat percentiles (usec): 00:23:30.878 | 1.00th=[ 3556], 5.00th=[ 4293], 10.00th=[ 4817], 20.00th=[ 5407], 00:23:30.878 | 30.00th=[ 5866], 40.00th=[ 6259], 50.00th=[ 6718], 60.00th=[ 7177], 00:23:30.878 | 70.00th=[ 7635], 80.00th=[ 8029], 90.00th=[ 8979], 95.00th=[ 9634], 00:23:30.878 | 99.00th=[11994], 99.50th=[45351], 99.90th=[50070], 99.95th=[50594], 00:23:30.878 | 99.99th=[50594] 00:23:30.878 bw ( KiB/s): min=81056, max=92896, per=50.11%, avg=85624.00, stdev=5507.80, samples=4 00:23:30.878 iops : min= 5066, max= 5806, avg=5351.50, stdev=344.24, samples=4 00:23:30.878 write: IOPS=6231, BW=97.4MiB/s (102MB/s)(175MiB/1794msec); 0 zone resets 00:23:30.878 slat (usec): min=29, max=408, avg=31.80, stdev= 7.72 00:23:30.878 clat (usec): min=2732, max=15838, avg=8747.93, stdev=1548.73 00:23:30.878 lat (usec): min=2762, max=15949, avg=8779.73, stdev=1550.54 00:23:30.878 clat percentiles (usec): 00:23:30.878 | 1.00th=[ 5735], 5.00th=[ 6521], 10.00th=[ 6980], 
20.00th=[ 7504], 00:23:30.878 | 30.00th=[ 7832], 40.00th=[ 8225], 50.00th=[ 8586], 60.00th=[ 8979], 00:23:30.878 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10814], 95.00th=[11600], 00:23:30.878 | 99.00th=[13173], 99.50th=[14091], 99.90th=[15401], 99.95th=[15664], 00:23:30.878 | 99.99th=[15795] 00:23:30.878 bw ( KiB/s): min=84384, max=96544, per=89.19%, avg=88936.00, stdev=5807.35, samples=4 00:23:30.878 iops : min= 5274, max= 6034, avg=5558.50, stdev=362.96, samples=4 00:23:30.878 lat (msec) : 2=0.02%, 4=1.84%, 10=89.34%, 20=8.41%, 50=0.33% 00:23:30.878 lat (msec) : 100=0.06% 00:23:30.878 cpu : usr=86.19%, sys=13.11%, ctx=45, majf=0, minf=3 00:23:30.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:23:30.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:30.878 issued rwts: total=21433,11180,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:30.878 00:23:30.878 Run status group 0 (all jobs): 00:23:30.878 READ: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=335MiB (351MB), run=2007-2007msec 00:23:30.878 WRITE: bw=97.4MiB/s (102MB/s), 97.4MiB/s-97.4MiB/s (102MB/s-102MB/s), io=175MiB (183MB), run=1794-1794msec 00:23:30.878 12:33:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:31.138 12:33:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:31.138 rmmod nvme_tcp 00:23:31.138 rmmod nvme_fabrics 00:23:31.138 rmmod nvme_keyring 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 533278 ']' 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 533278 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 533278 ']' 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 533278 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 533278 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 533278' 00:23:31.138 killing process with pid 533278 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 533278 00:23:31.138 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 533278 00:23:31.396 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:31.397 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:31.397 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:31.397 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:31.397 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:31.397 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:31.397 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:31.397 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:31.397 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:31.397 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.397 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.397 12:33:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.301 12:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:33.301 00:23:33.301 real 0m15.785s 00:23:33.301 user 0m46.304s 00:23:33.301 sys 0m6.486s 00:23:33.301 12:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:33.301 12:33:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:33.301 ************************************ 00:23:33.301 END TEST nvmf_fio_host 00:23:33.301 ************************************ 00:23:33.560 12:33:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:33.560 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:33.560 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:33.560 12:33:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.561 ************************************ 00:23:33.561 START TEST nvmf_failover 00:23:33.561 ************************************ 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:33.561 * Looking for test storage... 
00:23:33.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:33.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.561 --rc genhtml_branch_coverage=1 00:23:33.561 --rc genhtml_function_coverage=1 00:23:33.561 --rc genhtml_legend=1 00:23:33.561 --rc geninfo_all_blocks=1 00:23:33.561 --rc geninfo_unexecuted_blocks=1 00:23:33.561 00:23:33.561 ' 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:23:33.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.561 --rc genhtml_branch_coverage=1 00:23:33.561 --rc genhtml_function_coverage=1 00:23:33.561 --rc genhtml_legend=1 00:23:33.561 --rc geninfo_all_blocks=1 00:23:33.561 --rc geninfo_unexecuted_blocks=1 00:23:33.561 00:23:33.561 ' 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:33.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.561 --rc genhtml_branch_coverage=1 00:23:33.561 --rc genhtml_function_coverage=1 00:23:33.561 --rc genhtml_legend=1 00:23:33.561 --rc geninfo_all_blocks=1 00:23:33.561 --rc geninfo_unexecuted_blocks=1 00:23:33.561 00:23:33.561 ' 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:33.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.561 --rc genhtml_branch_coverage=1 00:23:33.561 --rc genhtml_function_coverage=1 00:23:33.561 --rc genhtml_legend=1 00:23:33.561 --rc geninfo_all_blocks=1 00:23:33.561 --rc geninfo_unexecuted_blocks=1 00:23:33.561 00:23:33.561 ' 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:33.561 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:33.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:33.562 12:33:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:40.160 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:40.160 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:40.160 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:23:40.160 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:40.160 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:40.160 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:40.160 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:40.160 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:40.160 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:40.160 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:40.160 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.161 12:33:22 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:40.161 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.161 12:33:22 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:40.161 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:40.161 12:33:22 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:40.161 Found net devices under 0000:86:00.0: cvl_0_0 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:40.161 Found net devices under 0000:86:00.1: cvl_0_1 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:40.161 12:33:22 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:40.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:23:40.161 00:23:40.161 --- 10.0.0.2 ping statistics --- 00:23:40.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.161 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:40.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:23:40.161 00:23:40.161 --- 10.0.0.1 ping statistics --- 00:23:40.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.161 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.161 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=538598 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 538598 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 538598 ']' 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:40.162 [2024-11-20 12:33:22.645930] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:23:40.162 [2024-11-20 12:33:22.645978] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.162 [2024-11-20 12:33:22.709514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:40.162 [2024-11-20 12:33:22.752188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.162 [2024-11-20 12:33:22.752222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.162 [2024-11-20 12:33:22.752229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.162 [2024-11-20 12:33:22.752235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:40.162 [2024-11-20 12:33:22.752241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.162 [2024-11-20 12:33:22.753614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.162 [2024-11-20 12:33:22.753724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.162 [2024-11-20 12:33:22.753724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.162 12:33:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:40.162 [2024-11-20 12:33:23.062677] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.162 12:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:40.420 Malloc0 00:23:40.420 12:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:40.420 12:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:40.679 12:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:40.937 [2024-11-20 12:33:23.896856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.937 12:33:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:41.196 [2024-11-20 12:33:24.093395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:41.196 12:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:41.196 [2024-11-20 12:33:24.277967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:41.196 12:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:41.196 12:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=538849 00:23:41.196 12:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:41.196 12:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 538849 /var/tmp/bdevperf.sock 00:23:41.196 12:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- 
# '[' -z 538849 ']' 00:23:41.196 12:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.196 12:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.196 12:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.196 12:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.196 12:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:41.454 12:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.454 12:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:41.454 12:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:42.022 NVMe0n1 00:23:42.022 12:33:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:42.281 00:23:42.281 12:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=539078 00:23:42.281 12:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:42.281 12:33:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:23:43.660 12:33:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:43.660 [2024-11-20 12:33:26.530039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137e2d0 is same with the state(6) to be set 00:23:43.661 12:33:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:46.967 12:33:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock 2>/dev/null || /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:46.967 00:23:46.967 12:33:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:47.228 [2024-11-20 12:33:30.150449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137f060 is same with the state(6) to be set 00:23:47.228 12:33:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:50.518 12:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t
tcp -a 10.0.0.2 -s 4420 00:23:50.518 [2024-11-20 12:33:33.363382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.518 12:33:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:51.459 12:33:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:51.726 [2024-11-20 12:33:34.577391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137fe30 is same with the state(6) to be set [message repeated 15 more times, 12:33:34.577440 through 12:33:34.577527] 00:23:51.726 12:33:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 539078 00:23:58.295 { 00:23:58.295 "results": [ 00:23:58.295 { 00:23:58.295 "job": "NVMe0n1", 00:23:58.295 "core_mask": "0x1", 00:23:58.295 "workload": "verify", 00:23:58.295 "status": "finished", 00:23:58.295 "verify_range": { 00:23:58.295 "start": 0, 00:23:58.295 "length": 16384 00:23:58.295 }, 00:23:58.295 "queue_depth": 128, 00:23:58.295 "io_size": 4096, 00:23:58.295 "runtime": 15.004927, 00:23:58.295 "iops": 10890.48950388096, 00:23:58.295 "mibps": 42.540974624535, 00:23:58.295 "io_failed": 13997, 00:23:58.295 "io_timeout": 0, 00:23:58.295 "avg_latency_us": 10803.83861973775, 00:23:58.295 "min_latency_us": 434.5321739130435, 00:23:58.295 "max_latency_us": 18236.104347826087 00:23:58.295 } 00:23:58.295 ], 00:23:58.296 "core_count": 1 00:23:58.296 } 00:23:58.296 12:33:40
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 538849 00:23:58.296 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 538849 ']' 00:23:58.296 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 538849 00:23:58.296 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:58.296 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.296 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 538849 00:23:58.296 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:58.296 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:58.296 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 538849' 00:23:58.296 killing process with pid 538849 00:23:58.296 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 538849 00:23:58.296 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 538849 00:23:58.296 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:58.296 [2024-11-20 12:33:24.350044] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:23:58.296 [2024-11-20 12:33:24.350097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid538849 ] 00:23:58.296 [2024-11-20 12:33:24.426937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.296 [2024-11-20 12:33:24.468577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.296 Running I/O for 15 seconds... 00:23:58.296 11067.00 IOPS, 43.23 MiB/s [2024-11-20T11:33:41.412Z] [2024-11-20 12:33:26.532113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.296 [2024-11-20 12:33:26.532147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [analogous print_command/print_completion pairs repeated, 12:33:26.532162 through 12:33:26.533511: READ sqid:1 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 for lba:97240 through lba:97744, then WRITE sqid:1 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 for lba:97752 through lba:97936, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0] 00:23:58.298 [2024-11-20 
12:33:26.533520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533599] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.298 [2024-11-20 12:33:26.533826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.298 [2024-11-20 12:33:26.533834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.299 [2024-11-20 12:33:26.533840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.533850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.299 [2024-11-20 12:33:26.533857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.533867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.299 [2024-11-20 12:33:26.533875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.533884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.299 [2024-11-20 12:33:26.533891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.533899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.299 [2024-11-20 12:33:26.533906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.533915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.299 [2024-11-20 12:33:26.533921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.533930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.299 [2024-11-20 12:33:26.533937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.533945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.299 
[2024-11-20 12:33:26.533955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.533963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.299 [2024-11-20 12:33:26.533970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.533978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.299 [2024-11-20 12:33:26.533985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.533993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.299 [2024-11-20 12:33:26.534000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.534008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.299 [2024-11-20 12:33:26.534015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.534023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.299 [2024-11-20 12:33:26.534030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.534038] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.299 [2024-11-20 12:33:26.534045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.534053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.299 [2024-11-20 12:33:26.534061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.534069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.299 [2024-11-20 12:33:26.534076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.534085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.299 [2024-11-20 12:33:26.534092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.534111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.299 [2024-11-20 12:33:26.534117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.299 [2024-11-20 12:33:26.534124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98248 len:8 PRP1 0x0 PRP2 0x0 00:23:58.299 [2024-11-20 12:33:26.534132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.534175] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:58.299 [2024-11-20 12:33:26.534197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.299 [2024-11-20 12:33:26.534205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.534213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.299 [2024-11-20 12:33:26.534220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.534227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.299 [2024-11-20 12:33:26.534234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.534241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.299 [2024-11-20 12:33:26.534248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299 [2024-11-20 12:33:26.534255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:23:58.299 [2024-11-20 12:33:26.534283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b340 (9): Bad file descriptor 00:23:58.299 [2024-11-20 12:33:26.537110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:58.299 [2024-11-20 12:33:26.648924] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:23:58.299 10489.00 IOPS, 40.97 MiB/s [2024-11-20T11:33:41.415Z] 10661.00 IOPS, 41.64 MiB/s [2024-11-20T11:33:41.415Z] 10808.25 IOPS, 42.22 MiB/s [2024-11-20T11:33:41.415Z]
[2024-11-20 12:33:30.151051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.299 [2024-11-20 12:33:30.151087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.299
[... the same WRITE command / ABORTED - SQ DELETION pair repeats for lba:59904 through lba:59984 (len:8, qid:1, cid varies) ...]
[... matching READ command / ABORTED - SQ DELETION pairs repeat for lba:59264 through lba:59440 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, cid varies) ...]
[... the WRITE command / ABORTED - SQ DELETION pairs resume for lba:59992 through lba:60096 (len:8, qid:1, cid varies) ...]
00:23:58.301 [2024-11-20 12:33:30.151834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60104
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.151841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.151849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.151855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.151863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.151870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.151878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.151886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.151894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.151900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.151908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.151915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 
12:33:30.151923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.151929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.151938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.151944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.151959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.151966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.151974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.151981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.151989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.151996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.152011] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.152025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.152039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.152055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.152069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.152086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.152101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.152115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.152129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.152144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.301 [2024-11-20 12:33:30.152159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.301 [2024-11-20 12:33:30.152174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.301 [2024-11-20 12:33:30.152190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.301 [2024-11-20 12:33:30.152206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.301 [2024-11-20 12:33:30.152221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.301 [2024-11-20 12:33:30.152235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.301 [2024-11-20 12:33:30.152250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.301 [2024-11-20 12:33:30.152267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.301 [2024-11-20 12:33:30.152282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.301 [2024-11-20 12:33:30.152297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.301 [2024-11-20 12:33:30.152313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.301 [2024-11-20 12:33:30.152330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.301 [2024-11-20 12:33:30.152346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.301 
[2024-11-20 12:33:30.152364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.301 [2024-11-20 12:33:30.152381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.301 [2024-11-20 12:33:30.152397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.301 [2024-11-20 12:33:30.152405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.301 [2024-11-20 12:33:30.152411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 
[2024-11-20 12:33:30.152626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:59752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:58.302 [2024-11-20 12:33:30.152884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.302 [2024-11-20 12:33:30.152914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.152988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.152994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.302 [2024-11-20 12:33:30.153003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.302 [2024-11-20 12:33:30.153011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:30.153020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.303 [2024-11-20 12:33:30.153028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:30.153050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.303 [2024-11-20 12:33:30.153057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.303 [2024-11-20 12:33:30.153063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59888 len:8 PRP1 0x0 PRP2 0x0 00:23:58.303 [2024-11-20 12:33:30.153073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:30.153117] 
bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:58.303 [2024-11-20 12:33:30.153138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.303 [2024-11-20 12:33:30.153146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:30.153153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.303 [2024-11-20 12:33:30.153160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:30.153167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.303 [2024-11-20 12:33:30.153174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:30.153183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.303 [2024-11-20 12:33:30.153190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:30.153197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:23:58.303 [2024-11-20 12:33:30.156054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:58.303 [2024-11-20 12:33:30.156086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b340 (9): Bad file descriptor 00:23:58.303 [2024-11-20 12:33:30.223452] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:23:58.303 10700.60 IOPS, 41.80 MiB/s [2024-11-20T11:33:41.419Z] 10769.67 IOPS, 42.07 MiB/s [2024-11-20T11:33:41.419Z] 10795.71 IOPS, 42.17 MiB/s [2024-11-20T11:33:41.419Z] 10818.38 IOPS, 42.26 MiB/s [2024-11-20T11:33:41.419Z] 10860.78 IOPS, 42.42 MiB/s [2024-11-20T11:33:41.419Z] [2024-11-20 12:33:34.578180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.303 [2024-11-20 12:33:34.578215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 
12:33:34.578270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:107 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:58.303 [2024-11-20 12:33:34.578449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 
lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.303 [2024-11-20 12:33:34.578661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.303 [2024-11-20 12:33:34.578670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 
12:33:34.578709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578790] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.578987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.578994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.579002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.579008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.579017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.579024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.579032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.579039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.579047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.579054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.579062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.579069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.579078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.579085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.579092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.579099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.579107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.579114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.579122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.579129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.579137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 
[2024-11-20 12:33:34.579144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.579152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.579158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.579166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.579173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.579180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.579187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.304 [2024-11-20 12:33:34.579195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.304 [2024-11-20 12:33:34.579202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579225] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579398] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 
nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 
[2024-11-20 12:33:34.579571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579655] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:58.305 [2024-11-20 12:33:34.579739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:23:58.305 [2024-11-20 12:33:34.579769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78072 len:8 PRP1 0x0 PRP2 0x0 00:23:58.305 [2024-11-20 12:33:34.579776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.305 [2024-11-20 12:33:34.579791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.305 [2024-11-20 12:33:34.579796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78080 len:8 PRP1 0x0 PRP2 0x0 00:23:58.305 [2024-11-20 12:33:34.579803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.305 [2024-11-20 12:33:34.579810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.305 [2024-11-20 12:33:34.579815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.579821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78088 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.579827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.579834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.579839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.579847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78096 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.579854] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.579862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.579867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.579873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78104 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.579880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.579886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.579891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.579897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78112 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.579903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.579909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.579914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.579921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78120 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.579928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.579934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.579939] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.579945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78128 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.579956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.579962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.579967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.579973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78136 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.579979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.579985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.579990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.579996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78144 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.580002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.580009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.580014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.580020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78152 len:8 PRP1 0x0 PRP2 0x0 
00:23:58.306 [2024-11-20 12:33:34.580026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.580032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.580037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.580044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78160 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.580052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.580059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.580064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.580069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78168 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.580076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.580082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.580088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.580093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77160 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.580099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.580106] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.580111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.580119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77168 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.580125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.580132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.580137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.580142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77176 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.580148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.580155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.580160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.580166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77184 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.580172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.580179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.580184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.580189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77192 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.580196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.580202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.580207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.580213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77200 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.580220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.580227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.580231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.580242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77208 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.580249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.580256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.580261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.580267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77216 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.580273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.580280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.580285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.580290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77224 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.580297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.580303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.580308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.580315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77232 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.580321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.580328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.580333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.580339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77240 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.580347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.580354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.580358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.580364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77248 len:8 PRP1 0x0 PRP2 0x0 00:23:58.306 [2024-11-20 12:33:34.580370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.306 [2024-11-20 12:33:34.580377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.306 [2024-11-20 12:33:34.580383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.306 [2024-11-20 12:33:34.580388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77256 len:8 PRP1 0x0 PRP2 0x0 00:23:58.307 [2024-11-20 12:33:34.580395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.307 [2024-11-20 12:33:34.580401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.307 [2024-11-20 12:33:34.591077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.307 [2024-11-20 12:33:34.591093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77264 len:8 PRP1 0x0 PRP2 0x0 00:23:58.307 [2024-11-20 12:33:34.591103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.307 [2024-11-20 12:33:34.591114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:58.307 [2024-11-20 12:33:34.591124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:58.307 [2024-11-20 12:33:34.591132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77272 len:8 PRP1 0x0 PRP2 0x0 00:23:58.307 [2024-11-20 12:33:34.591142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.307 [2024-11-20 12:33:34.591191] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:58.307 [2024-11-20 12:33:34.591219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.307 [2024-11-20 12:33:34.591229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.307 [2024-11-20 12:33:34.591239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.307 [2024-11-20 12:33:34.591248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.307 [2024-11-20 12:33:34.591258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.307 [2024-11-20 12:33:34.591266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.307 [2024-11-20 12:33:34.591276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.307 [2024-11-20 12:33:34.591285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.307 [2024-11-20 12:33:34.591294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:23:58.307 [2024-11-20 12:33:34.591333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b340 (9): Bad file descriptor 00:23:58.307 [2024-11-20 12:33:34.595187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:58.307 [2024-11-20 12:33:34.704840] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:23:58.307 10762.10 IOPS, 42.04 MiB/s [2024-11-20T11:33:41.423Z] 10791.18 IOPS, 42.15 MiB/s [2024-11-20T11:33:41.423Z] 10816.33 IOPS, 42.25 MiB/s [2024-11-20T11:33:41.423Z] 10833.85 IOPS, 42.32 MiB/s [2024-11-20T11:33:41.423Z] 10871.79 IOPS, 42.47 MiB/s [2024-11-20T11:33:41.423Z] 10887.80 IOPS, 42.53 MiB/s 00:23:58.307 Latency(us) 00:23:58.307 [2024-11-20T11:33:41.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.307 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:58.307 Verification LBA range: start 0x0 length 0x4000 00:23:58.307 NVMe0n1 : 15.00 10890.49 42.54 932.83 0.00 10803.84 434.53 18236.10 00:23:58.307 [2024-11-20T11:33:41.423Z] =================================================================================================================== 00:23:58.307 [2024-11-20T11:33:41.423Z] Total : 10890.49 42.54 932.83 0.00 10803.84 434.53 18236.10 00:23:58.307 Received shutdown signal, test time was about 15.000000 seconds 00:23:58.307 00:23:58.307 Latency(us) 00:23:58.307 [2024-11-20T11:33:41.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.307 [2024-11-20T11:33:41.423Z] =================================================================================================================== 00:23:58.307 [2024-11-20T11:33:41.423Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:58.307 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
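(Editor's note) The MiB/s column in the bdevperf summary above is just the IOPS figure scaled by the 4096-byte IO size (IOPS × 4096 / 2^20, i.e. IOPS / 256). A quick sanity check against the reported total:

```python
# Verify the IOPS -> MiB/s conversion used in the bdevperf summary table.
# With a fixed 4096-byte IO size, MiB/s = IOPS * 4096 / 2**20 = IOPS / 256.
IO_SIZE = 4096  # bytes per IO, as reported in the job line above

def iops_to_mibps(iops: float, io_size: int = IO_SIZE) -> float:
    """Convert an IOPS figure to MiB/s for a fixed IO size."""
    return iops * io_size / (1024 * 1024)

# Total reported above: 10890.49 IOPS at 4096 B -> 42.54 MiB/s.
print(round(iops_to_mibps(10890.49), 2))  # -> 42.54
```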
00:23:58.307 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:58.307 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:58.307 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=541595 00:23:58.307 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:58.307 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 541595 /var/tmp/bdevperf.sock 00:23:58.307 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 541595 ']' 00:23:58.307 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.307 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:58.307 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:58.307 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:58.307 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:58.307 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:58.307 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:58.307 12:33:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:58.307 [2024-11-20 12:33:41.153651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:58.307 12:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:58.307 [2024-11-20 12:33:41.342203] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:58.307 12:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:58.566 NVMe0n1 00:23:58.566 12:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:59.133 00:23:59.133 12:33:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:59.392 00:23:59.392 12:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:59.392 12:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:59.392 12:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:59.651 12:33:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:02.938 12:33:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:02.938 12:33:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:02.938 12:33:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=542411 00:24:02.938 12:33:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:02.938 12:33:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 542411 00:24:04.315 { 00:24:04.315 "results": [ 00:24:04.315 { 00:24:04.315 "job": "NVMe0n1", 00:24:04.315 "core_mask": "0x1", 00:24:04.315 "workload": "verify", 00:24:04.315 "status": "finished", 00:24:04.315 "verify_range": { 00:24:04.315 "start": 0, 00:24:04.315 "length": 16384 00:24:04.315 }, 00:24:04.315 "queue_depth": 128, 00:24:04.315 "io_size": 4096, 00:24:04.315 "runtime": 1.05373, 00:24:04.315 "iops": 10513.12954931529, 00:24:04.315 "mibps": 41.06691230201285, 00:24:04.315 "io_failed": 0, 00:24:04.315 "io_timeout": 0, 00:24:04.315 "avg_latency_us": 
11719.44059216465, 00:24:04.315 "min_latency_us": 2421.9826086956523, 00:24:04.315 "max_latency_us": 44222.55304347826 00:24:04.315 } 00:24:04.315 ], 00:24:04.315 "core_count": 1 00:24:04.315 } 00:24:04.315 12:33:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:04.315 [2024-11-20 12:33:40.769710] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:24:04.315 [2024-11-20 12:33:40.769761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid541595 ] 00:24:04.315 [2024-11-20 12:33:40.844239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.315 [2024-11-20 12:33:40.882415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.315 [2024-11-20 12:33:42.663839] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:04.315 [2024-11-20 12:33:42.663887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.315 [2024-11-20 12:33:42.663898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.315 [2024-11-20 12:33:42.663907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.315 [2024-11-20 12:33:42.663914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.315 [2024-11-20 12:33:42.663926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:04.315 [2024-11-20 12:33:42.663934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.315 [2024-11-20 12:33:42.663941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.315 [2024-11-20 12:33:42.663953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.315 [2024-11-20 12:33:42.663960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:24:04.315 [2024-11-20 12:33:42.663985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:04.315 [2024-11-20 12:33:42.663998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1657340 (9): Bad file descriptor 00:24:04.315 [2024-11-20 12:33:42.797118] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:24:04.315 Running I/O for 1 seconds... 
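(Editor's note) The `perform_tests` result printed above is plain JSON, and its `mibps` field is derived from `iops` and the 4096-byte `io_size` (iops / 256). A sketch of extracting and cross-checking those numbers, using a trimmed copy of the report from this log:

```python
import json

# Trimmed copy of the perform_tests JSON printed earlier in this log.
raw = """
{
  "results": [
    {
      "job": "NVMe0n1",
      "io_size": 4096,
      "runtime": 1.05373,
      "iops": 10513.12954931529,
      "mibps": 41.06691230201285,
      "io_failed": 0
    }
  ],
  "core_count": 1
}
"""

report = json.loads(raw)
for job in report["results"]:
    # mibps is iops scaled by io_size: iops * io_size / 2**20.
    derived = job["iops"] * job["io_size"] / (1024 * 1024)
    print(job["job"], round(derived, 2), round(job["mibps"], 2))
```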
00:24:04.315 10871.00 IOPS, 42.46 MiB/s 00:24:04.315 Latency(us) 00:24:04.315 [2024-11-20T11:33:47.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.315 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:04.315 Verification LBA range: start 0x0 length 0x4000 00:24:04.315 NVMe0n1 : 1.05 10513.13 41.07 0.00 0.00 11719.44 2421.98 44222.55 00:24:04.315 [2024-11-20T11:33:47.431Z] =================================================================================================================== 00:24:04.315 [2024-11-20T11:33:47.431Z] Total : 10513.13 41.07 0.00 0.00 11719.44 2421.98 44222.55 00:24:04.315 12:33:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:04.315 12:33:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:04.315 12:33:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:04.574 12:33:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:04.574 12:33:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:04.833 12:33:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:04.833 12:33:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:08.124 12:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:08.124 12:33:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:08.124 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 541595 00:24:08.124 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 541595 ']' 00:24:08.124 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 541595 00:24:08.124 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:08.124 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:08.124 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 541595 00:24:08.124 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:08.124 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:08.124 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 541595' 00:24:08.124 killing process with pid 541595 00:24:08.124 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 541595 00:24:08.124 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 541595 00:24:08.383 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:08.383 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:08.642 rmmod nvme_tcp 00:24:08.642 rmmod nvme_fabrics 00:24:08.642 rmmod nvme_keyring 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 538598 ']' 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 538598 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 538598 ']' 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 538598 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 538598 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 538598' 00:24:08.642 killing process with pid 538598 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 538598 00:24:08.642 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 538598 00:24:08.905 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:08.905 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:08.905 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:08.905 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:08.905 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:08.905 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:08.905 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:08.905 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:08.905 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:08.905 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.905 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.905 12:33:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.034 12:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:11.034 00:24:11.034 real 0m37.460s 00:24:11.034 user 1m58.774s 00:24:11.034 sys 
0m7.880s 00:24:11.034 12:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:11.034 12:33:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:11.034 ************************************ 00:24:11.034 END TEST nvmf_failover 00:24:11.034 ************************************ 00:24:11.034 12:33:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:11.034 12:33:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:11.034 12:33:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:11.034 12:33:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.034 ************************************ 00:24:11.034 START TEST nvmf_host_discovery 00:24:11.034 ************************************ 00:24:11.034 12:33:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:11.034 * Looking for test storage... 
00:24:11.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:11.034 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:11.034 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:24:11.034 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:11.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.295 --rc genhtml_branch_coverage=1 00:24:11.295 --rc genhtml_function_coverage=1 00:24:11.295 --rc 
genhtml_legend=1 00:24:11.295 --rc geninfo_all_blocks=1 00:24:11.295 --rc geninfo_unexecuted_blocks=1 00:24:11.295 00:24:11.295 ' 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:11.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.295 --rc genhtml_branch_coverage=1 00:24:11.295 --rc genhtml_function_coverage=1 00:24:11.295 --rc genhtml_legend=1 00:24:11.295 --rc geninfo_all_blocks=1 00:24:11.295 --rc geninfo_unexecuted_blocks=1 00:24:11.295 00:24:11.295 ' 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:11.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.295 --rc genhtml_branch_coverage=1 00:24:11.295 --rc genhtml_function_coverage=1 00:24:11.295 --rc genhtml_legend=1 00:24:11.295 --rc geninfo_all_blocks=1 00:24:11.295 --rc geninfo_unexecuted_blocks=1 00:24:11.295 00:24:11.295 ' 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:11.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.295 --rc genhtml_branch_coverage=1 00:24:11.295 --rc genhtml_function_coverage=1 00:24:11.295 --rc genhtml_legend=1 00:24:11.295 --rc geninfo_all_blocks=1 00:24:11.295 --rc geninfo_unexecuted_blocks=1 00:24:11.295 00:24:11.295 ' 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.295 12:33:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.295 12:33:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.295 12:33:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.295 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:11.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:11.296 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.863 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:17.864 
12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.864 12:33:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:17.864 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:17.864 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:17.864 Found net devices under 0000:86:00.0: cvl_0_0 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:17.864 Found net devices under 0000:86:00.1: cvl_0_1 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 ))
00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:17.864 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:17.864 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:17.864 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:17.864 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:17.864 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:17.864 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:17.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:17.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms
00:24:17.865
00:24:17.865 --- 10.0.0.2 ping statistics ---
00:24:17.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:17.865 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:17.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:17.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms
00:24:17.865
00:24:17.865 --- 10.0.0.1 ping statistics ---
00:24:17.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:17.865 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=546839
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 546839
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 546839 ']'
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:17.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.865 [2024-11-20 12:34:00.210429] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization...
00:24:17.865 [2024-11-20 12:34:00.210480] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:17.865 [2024-11-20 12:34:00.277889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:17.865 [2024-11-20 12:34:00.322452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:17.865 [2024-11-20 12:34:00.322487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:17.865 [2024-11-20 12:34:00.322494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:17.865 [2024-11-20 12:34:00.322500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:17.865 [2024-11-20 12:34:00.322505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:17.865 [2024-11-20 12:34:00.323046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.865 [2024-11-20 12:34:00.470866] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.865 [2024-11-20 12:34:00.483054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.865 null0
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.865 null1
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=547000
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 547000 /tmp/host.sock
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 547000 ']'
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:24:17.865 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.865 [2024-11-20 12:34:00.562046] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization...
00:24:17.865 [2024-11-20 12:34:00.562087] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid547000 ]
00:24:17.865 [2024-11-20 12:34:00.634865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:17.865 [2024-11-20 12:34:00.675749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:17.865 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:17.866 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.125 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:24:18.125 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:24:18.125 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.125 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.125 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.125 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:24:18.125 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:18.125 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.125 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.125 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.125 [2024-11-20 12:34:01.088603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.125 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:24:18.126 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:24:18.126 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:24:18.126 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:18.126 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:24:18.126 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.126 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.126 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.126 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:18.126 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:18.126 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:18.126 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:18.126 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:18.384 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:24:18.384 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:18.384 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:18.384 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.384 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:18.384 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.384 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:18.384 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.384 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:24:18.384 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:24:18.951 [2024-11-20 12:34:01.849516] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:24:18.951 [2024-11-20 12:34:01.849535] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:24:18.951 [2024-11-20 12:34:01.849547] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:18.951 [2024-11-20 12:34:01.976942] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:24:18.951 [2024-11-20 12:34:02.038514] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:24:18.951 [2024-11-20 12:34:02.039273] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1655dd0:1 started.
00:24:18.951 [2024-11-20 12:34:02.040659] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:24:18.951 [2024-11-20 12:34:02.040674] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:24:18.951 [2024-11-20 12:34:02.047697] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1655dd0 was disconnected and freed. delete nvme_qpair.
00:24:19.211 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:19.211 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:19.211 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:24:19.211 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:19.211 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:19.211 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:19.211 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:19.211 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:19.211 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:19.211 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:19.211 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:19.211 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:19.211 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:24:19.211 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:24:19.211 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:19.211 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:19.211 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:24:19.471
12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.471 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:19.729 [2024-11-20 12:34:02.731604] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x16561a0:1 started. 00:24:19.730 [2024-11-20 12:34:02.739396] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x16561a0 was disconnected and freed. delete nvme_qpair. 
00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.730 [2024-11-20 12:34:02.813331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:19.730 [2024-11-20 12:34:02.813912] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:19.730 [2024-11-20 12:34:02.813932] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:19.730 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:19.989 12:34:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.989 [2024-11-20 12:34:02.901545] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:19.989 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:20.248 [2024-11-20 12:34:03.124665] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:24:20.248 [2024-11-20 12:34:03.124699] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:20.248 [2024-11-20 12:34:03.124707] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:24:20.248 [2024-11-20 12:34:03.124712] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:21.185 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:21.185 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:21.185 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:21.185 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:21.185 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:21.185 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.185 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:21.185 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.185 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:21.185 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.185 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:21.185 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:21.185 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:21.185 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:21.185 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:24:21.185 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:21.185 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.186 [2024-11-20 12:34:04.069306] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:21.186 [2024-11-20 12:34:04.069328] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
'[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:21.186 [2024-11-20 12:34:04.075998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.186 [2024-11-20 12:34:04.076016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.186 [2024-11-20 12:34:04.076029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.186 [2024-11-20 12:34:04.076036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.186 [2024-11-20 12:34:04.076043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.186 [2024-11-20 12:34:04.076050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.186 [2024-11-20 12:34:04.076057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.186 [2024-11-20 12:34:04.076063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.186 [2024-11-20 12:34:04.076070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626390 is same with the state(6) to be set 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:21.186 12:34:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:21.186 [2024-11-20 12:34:04.086012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1626390 (9): Bad file descriptor 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.186 [2024-11-20 12:34:04.096044] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:21.186 [2024-11-20 12:34:04.096055] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:21.186 [2024-11-20 12:34:04.096060] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:21.186 [2024-11-20 12:34:04.096065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:21.186 [2024-11-20 12:34:04.096080] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:21.186 [2024-11-20 12:34:04.096277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.186 [2024-11-20 12:34:04.096291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1626390 with addr=10.0.0.2, port=4420 00:24:21.186 [2024-11-20 12:34:04.096299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626390 is same with the state(6) to be set 00:24:21.186 [2024-11-20 12:34:04.096310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1626390 (9): Bad file descriptor 00:24:21.186 [2024-11-20 12:34:04.096320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:21.186 [2024-11-20 12:34:04.096326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:21.186 [2024-11-20 12:34:04.096334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:21.186 [2024-11-20 12:34:04.096340] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:21.186 [2024-11-20 12:34:04.096345] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:21.186 [2024-11-20 12:34:04.096352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:21.186 [2024-11-20 12:34:04.106112] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:21.186 [2024-11-20 12:34:04.106122] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:21.186 [2024-11-20 12:34:04.106126] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:21.186 [2024-11-20 12:34:04.106130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:21.186 [2024-11-20 12:34:04.106143] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:21.186 [2024-11-20 12:34:04.106317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.186 [2024-11-20 12:34:04.106329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1626390 with addr=10.0.0.2, port=4420 00:24:21.186 [2024-11-20 12:34:04.106336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626390 is same with the state(6) to be set 00:24:21.186 [2024-11-20 12:34:04.106346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1626390 (9): Bad file descriptor 00:24:21.186 [2024-11-20 12:34:04.106356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:21.186 [2024-11-20 12:34:04.106362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:21.186 [2024-11-20 12:34:04.106368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:21.186 [2024-11-20 12:34:04.106374] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:21.186 [2024-11-20 12:34:04.106378] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:21.186 [2024-11-20 12:34:04.106382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:21.186 [2024-11-20 12:34:04.116175] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:21.186 [2024-11-20 12:34:04.116188] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:21.186 [2024-11-20 12:34:04.116192] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:21.186 [2024-11-20 12:34:04.116197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:21.186 [2024-11-20 12:34:04.116210] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:21.186 [2024-11-20 12:34:04.116442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.186 [2024-11-20 12:34:04.116454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1626390 with addr=10.0.0.2, port=4420 00:24:21.186 [2024-11-20 12:34:04.116462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626390 is same with the state(6) to be set 00:24:21.186 [2024-11-20 12:34:04.116472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1626390 (9): Bad file descriptor 00:24:21.186 [2024-11-20 12:34:04.116506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:21.186 [2024-11-20 12:34:04.116513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:21.186 [2024-11-20 12:34:04.116520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:21.186 [2024-11-20 12:34:04.116526] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:21.186 [2024-11-20 12:34:04.116530] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:21.186 [2024-11-20 12:34:04.116537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:21.186 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:21.187 [2024-11-20 12:34:04.126241] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:21.187 [2024-11-20 12:34:04.126254] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:21.187 [2024-11-20 12:34:04.126258] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:24:21.187 [2024-11-20 12:34:04.126262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:21.187 [2024-11-20 12:34:04.126274] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:21.187 [2024-11-20 12:34:04.126425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.187 [2024-11-20 12:34:04.126436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1626390 with addr=10.0.0.2, port=4420 00:24:21.187 [2024-11-20 12:34:04.126443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626390 is same with the state(6) to be set 00:24:21.187 [2024-11-20 12:34:04.126453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1626390 (9): Bad file descriptor 00:24:21.187 [2024-11-20 12:34:04.126462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:21.187 [2024-11-20 12:34:04.126469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:21.187 [2024-11-20 12:34:04.126475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:21.187 [2024-11-20 12:34:04.126481] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:21.187 [2024-11-20 12:34:04.126485] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:21.187 [2024-11-20 12:34:04.126489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:21.187 [2024-11-20 12:34:04.136304] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:21.187 [2024-11-20 12:34:04.136316] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:21.187 [2024-11-20 12:34:04.136323] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:21.187 [2024-11-20 12:34:04.136327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:21.187 [2024-11-20 12:34:04.136340] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:21.187 [2024-11-20 12:34:04.136501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.187 [2024-11-20 12:34:04.136513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1626390 with addr=10.0.0.2, port=4420 00:24:21.187 [2024-11-20 12:34:04.136520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626390 is same with the state(6) to be set 00:24:21.187 [2024-11-20 12:34:04.136530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1626390 (9): Bad file descriptor 00:24:21.187 [2024-11-20 12:34:04.137171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:21.187 [2024-11-20 12:34:04.137182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:21.187 [2024-11-20 12:34:04.137189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:21.187 [2024-11-20 12:34:04.137195] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:21.187 [2024-11-20 12:34:04.137199] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:21.187 [2024-11-20 12:34:04.137203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:21.187 [2024-11-20 12:34:04.146371] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:21.187 [2024-11-20 12:34:04.146382] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:21.187 [2024-11-20 12:34:04.146386] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:21.187 [2024-11-20 12:34:04.146389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:21.187 [2024-11-20 12:34:04.146402] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:21.187 [2024-11-20 12:34:04.146627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.187 [2024-11-20 12:34:04.146639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1626390 with addr=10.0.0.2, port=4420 00:24:21.187 [2024-11-20 12:34:04.146646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1626390 is same with the state(6) to be set 00:24:21.187 [2024-11-20 12:34:04.146656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1626390 (9): Bad file descriptor 00:24:21.187 [2024-11-20 12:34:04.146665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:21.187 [2024-11-20 12:34:04.146670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:21.187 [2024-11-20 12:34:04.146677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:21.187 [2024-11-20 12:34:04.146682] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:21.187 [2024-11-20 12:34:04.146686] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:21.187 [2024-11-20 12:34:04.146690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:21.187 [2024-11-20 12:34:04.156249] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:21.187 [2024-11-20 12:34:04.156271] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:21.187 12:34:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:21.187 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:21.188 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:21.188 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:21.188 12:34:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:21.188 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:21.188 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:21.188 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.188 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:21.447 12:34:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:21.447 12:34:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.447 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.383 [2024-11-20 12:34:05.488071] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:22.383 [2024-11-20 12:34:05.488088] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:22.383 [2024-11-20 12:34:05.488099] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:22.641 [2024-11-20 12:34:05.574358] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:22.641 [2024-11-20 12:34:05.674050] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:22.641 [2024-11-20 12:34:05.674568] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x16546e0:1 started. 00:24:22.641 [2024-11-20 12:34:05.676149] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:22.641 [2024-11-20 12:34:05.676173] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.641 [2024-11-20 12:34:05.677628] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x16546e0 was disconnected and freed. delete nvme_qpair. 
00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.641 request: 00:24:22.641 { 00:24:22.641 "name": "nvme", 00:24:22.641 "trtype": "tcp", 00:24:22.641 "traddr": "10.0.0.2", 00:24:22.641 "adrfam": "ipv4", 00:24:22.641 "trsvcid": "8009", 00:24:22.641 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:22.641 "wait_for_attach": true, 00:24:22.641 "method": "bdev_nvme_start_discovery", 00:24:22.641 "req_id": 1 00:24:22.641 } 00:24:22.641 Got JSON-RPC error response 00:24:22.641 response: 00:24:22.641 { 00:24:22.641 "code": -17, 00:24:22.641 
"message": "File exists" 00:24:22.641 } 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@55 -- # sort 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.641 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.900 request: 00:24:22.900 { 00:24:22.900 "name": "nvme_second", 00:24:22.900 "trtype": "tcp", 00:24:22.900 "traddr": "10.0.0.2", 00:24:22.900 "adrfam": "ipv4", 00:24:22.900 "trsvcid": "8009", 00:24:22.900 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:22.900 "wait_for_attach": true, 00:24:22.900 "method": "bdev_nvme_start_discovery", 00:24:22.900 "req_id": 1 00:24:22.900 } 00:24:22.900 Got JSON-RPC error response 00:24:22.900 response: 00:24:22.900 { 00:24:22.900 "code": -17, 00:24:22.900 "message": "File exists" 00:24:22.900 } 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # xargs 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:22.900 
12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.900 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.835 [2024-11-20 12:34:06.915553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.835 [2024-11-20 12:34:06.915581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1660ff0 with addr=10.0.0.2, port=8010 00:24:23.835 [2024-11-20 12:34:06.915595] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:23.835 [2024-11-20 12:34:06.915603] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:23.835 [2024-11-20 12:34:06.915610] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:25.211 [2024-11-20 12:34:07.917985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:25.211 [2024-11-20 12:34:07.918009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1660ff0 with addr=10.0.0.2, port=8010 00:24:25.211 [2024-11-20 12:34:07.918022] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:25.211 [2024-11-20 12:34:07.918029] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:25.211 
[2024-11-20 12:34:07.918035] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:26.148 [2024-11-20 12:34:08.920231] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:26.148 request: 00:24:26.148 { 00:24:26.148 "name": "nvme_second", 00:24:26.148 "trtype": "tcp", 00:24:26.148 "traddr": "10.0.0.2", 00:24:26.148 "adrfam": "ipv4", 00:24:26.148 "trsvcid": "8010", 00:24:26.148 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:26.148 "wait_for_attach": false, 00:24:26.148 "attach_timeout_ms": 3000, 00:24:26.148 "method": "bdev_nvme_start_discovery", 00:24:26.148 "req_id": 1 00:24:26.148 } 00:24:26.148 Got JSON-RPC error response 00:24:26.148 response: 00:24:26.148 { 00:24:26.148 "code": -110, 00:24:26.148 "message": "Connection timed out" 00:24:26.148 } 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # 
sort 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 547000 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:26.148 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:26.148 rmmod nvme_tcp 00:24:26.148 rmmod nvme_fabrics 00:24:26.148 rmmod nvme_keyring 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 546839 ']' 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 546839 00:24:26.148 
12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 546839 ']' 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 546839 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 546839 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 546839' 00:24:26.148 killing process with pid 546839 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 546839 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 546839 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:26.148 12:34:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.148 12:34:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:28.683 00:24:28.683 real 0m17.314s 00:24:28.683 user 0m20.560s 00:24:28.683 sys 0m6.016s 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.683 ************************************ 00:24:28.683 END TEST nvmf_host_discovery 00:24:28.683 ************************************ 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.683 ************************************ 00:24:28.683 START TEST nvmf_host_multipath_status 00:24:28.683 ************************************ 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh 
--transport=tcp 00:24:28.683 * Looking for test storage... 00:24:28.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 
00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.683 
12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:28.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.683 --rc genhtml_branch_coverage=1 00:24:28.683 --rc genhtml_function_coverage=1 00:24:28.683 --rc genhtml_legend=1 00:24:28.683 --rc geninfo_all_blocks=1 00:24:28.683 --rc geninfo_unexecuted_blocks=1 00:24:28.683 00:24:28.683 ' 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:28.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.683 --rc genhtml_branch_coverage=1 00:24:28.683 --rc genhtml_function_coverage=1 00:24:28.683 --rc genhtml_legend=1 00:24:28.683 --rc geninfo_all_blocks=1 00:24:28.683 --rc geninfo_unexecuted_blocks=1 00:24:28.683 00:24:28.683 ' 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:28.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.683 --rc genhtml_branch_coverage=1 00:24:28.683 --rc genhtml_function_coverage=1 00:24:28.683 --rc genhtml_legend=1 00:24:28.683 --rc geninfo_all_blocks=1 00:24:28.683 --rc geninfo_unexecuted_blocks=1 00:24:28.683 00:24:28.683 ' 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:28.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.683 --rc genhtml_branch_coverage=1 00:24:28.683 --rc genhtml_function_coverage=1 00:24:28.683 --rc genhtml_legend=1 00:24:28.683 --rc geninfo_all_blocks=1 00:24:28.683 --rc geninfo_unexecuted_blocks=1 00:24:28.683 00:24:28.683 ' 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 
00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.683 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:28.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:28.684 12:34:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:28.684 12:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:35.255 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:35.255 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:35.255 Found net devices under 0000:86:00.0: cvl_0_0 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.255 12:34:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:35.255 Found net devices under 0000:86:00.1: cvl_0_1 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:35.255 12:34:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:35.255 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:35.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:35.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:24:35.256 00:24:35.256 --- 10.0.0.2 ping statistics --- 00:24:35.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.256 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:35.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:35.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:24:35.256 00:24:35.256 --- 10.0.0.1 ping statistics --- 00:24:35.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.256 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=552070 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 552070 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 552070 ']' 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:35.256 [2024-11-20 12:34:17.615366] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:24:35.256 [2024-11-20 12:34:17.615412] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.256 [2024-11-20 12:34:17.695497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:35.256 [2024-11-20 12:34:17.737187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.256 [2024-11-20 12:34:17.737224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:35.256 [2024-11-20 12:34:17.737231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.256 [2024-11-20 12:34:17.737237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.256 [2024-11-20 12:34:17.737242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:35.256 [2024-11-20 12:34:17.738365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.256 [2024-11-20 12:34:17.738368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=552070 00:24:35.256 12:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:35.256 [2024-11-20 12:34:18.046313] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.256 12:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:24:35.256 Malloc0 00:24:35.256 12:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:35.514 12:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:35.773 12:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.773 [2024-11-20 12:34:18.864799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.031 12:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:36.032 [2024-11-20 12:34:19.069337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:36.032 12:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=552326 00:24:36.032 12:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:36.032 12:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:36.032 12:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 552326 /var/tmp/bdevperf.sock 00:24:36.032 12:34:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 552326 ']' 00:24:36.032 12:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:36.032 12:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.032 12:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:36.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:36.032 12:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.032 12:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:36.291 12:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.291 12:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:36.291 12:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:36.550 12:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:36.807 Nvme0n1 00:24:36.807 12:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:37.064 Nvme0n1 00:24:37.322 12:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:37.322 12:34:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:39.219 12:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:39.219 12:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:39.478 12:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:39.736 12:34:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:40.670 12:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:40.670 12:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:40.670 12:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.670 12:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:40.928 12:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.928 12:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:40.928 12:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.928 12:34:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:40.928 12:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:40.928 12:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:40.928 12:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.928 12:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:41.187 12:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.187 12:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:41.187 12:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.187 12:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:41.445 12:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.445 12:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:41.445 12:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:41.445 12:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.703 12:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.703 12:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:41.703 12:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.703 12:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:41.960 12:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.960 12:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:41.960 12:34:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:42.218 12:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:42.218 12:34:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:43.613 12:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:43.613 12:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:43.613 12:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.613 12:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:43.613 12:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:43.613 12:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:43.613 12:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.613 12:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:43.871 12:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.871 12:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:43.871 12:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.871 12:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:43.871 12:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.871 12:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:43.871 12:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:43.871 12:34:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.129 12:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.129 12:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:44.129 12:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.129 12:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:44.387 12:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.387 12:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:44.387 12:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.387 12:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:44.645 12:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.646 12:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:44.646 12:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:44.904 12:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:44.904 12:34:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:46.279 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:46.279 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:46.279 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.279 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:46.279 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.279 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:46.279 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.279 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:46.537 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:46.537 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:46.537 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.537 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:46.537 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.537 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:46.537 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.537 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:46.795 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.795 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:46.795 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.795 12:34:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:47.053 12:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.053 12:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:47.053 12:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.053 12:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:47.311 12:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.311 12:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:47.311 12:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:47.569 12:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:47.569 12:34:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:48.942 12:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:48.942 12:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:48.942 12:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.942 12:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:48.942 12:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.942 12:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:48.942 12:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.942 12:34:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:49.201 12:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:49.201 12:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:49.201 12:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.201 12:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:49.201 12:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.201 12:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:49.201 12:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.201 12:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:49.459 12:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.459 12:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:49.459 12:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.459 12:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:49.717 12:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.717 12:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:49.717 12:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.717 12:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:49.975 12:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:49.975 12:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:49.975 12:34:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:50.233 12:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:50.233 12:34:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:51.607 12:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:51.607 12:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:51.607 12:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.607 12:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:51.607 12:34:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:51.607 12:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:51.607 12:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.607 12:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:51.865 12:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:51.865 12:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:51.865 12:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.865 12:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:51.865 12:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.865 12:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:51.865 12:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.865 12:34:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:52.122 
12:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.122 12:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:52.122 12:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:52.122 12:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.380 12:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.380 12:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:52.380 12:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:52.380 12:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.639 12:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.639 12:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:52.639 12:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:52.896 12:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:52.896 12:34:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:54.270 12:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:54.270 12:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:54.270 12:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.270 12:34:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:54.270 12:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:54.270 12:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:54.270 12:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.270 12:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:54.528 12:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.528 12:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:54.528 12:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:54.529 12:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.529 12:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.529 12:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:54.529 12:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.529 12:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:54.786 12:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.786 12:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:54.786 12:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:54.786 12:34:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.044 12:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:55.044 12:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:55.044 12:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.044 12:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:55.302 12:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.302 12:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:55.561 12:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:55.561 12:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:55.561 12:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:55.819 12:34:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:56.761 12:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:56.761 12:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:56.761 12:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:56.761 12:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:57.019 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.019 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:57.019 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.019 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:57.277 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.277 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:57.277 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.277 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:57.534 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.534 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:57.535 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:57.535 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:57.792 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.792 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:57.792 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.792 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:57.792 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.792 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:57.792 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:57.792 12:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.050 12:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.050 12:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:58.050 12:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:58.307 12:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:58.566 12:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:59.528 12:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:59.529 12:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:59.529 12:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.529 12:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:59.835 12:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:59.835 12:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:59.835 12:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.835 12:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:00.101 12:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.101 12:34:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:00.101 12:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.101 12:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:00.101 12:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.101 12:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:00.101 12:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.101 12:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:00.359 12:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.359 12:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:00.359 12:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.359 12:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:00.617 12:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.617 
12:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:00.617 12:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:00.617 12:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.873 12:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.873 12:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:00.873 12:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:01.130 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:01.388 12:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:02.319 12:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:02.319 12:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:02.319 12:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.319 12:34:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:02.576 12:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.576 12:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:02.576 12:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.576 12:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:02.833 12:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.833 12:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:02.833 12:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.833 12:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:02.833 12:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.833 12:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:02.833 12:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.833 12:34:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:03.090 12:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.090 12:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:03.090 12:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.090 12:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:03.348 12:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.348 12:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:03.348 12:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.348 12:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:03.606 12:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.606 12:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:03.606 12:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:03.864 12:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:04.121 12:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:05.056 12:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:05.056 12:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:05.056 12:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.056 12:34:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:05.314 12:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.314 12:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:05.314 12:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.314 12:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:05.314 12:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:05.314 12:34:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:05.314 12:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.314 12:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:05.572 12:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.572 12:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:05.572 12:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.572 12:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:05.830 12:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.830 12:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:05.830 12:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.830 12:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:06.088 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.088 
12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:06.088 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.088 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:06.359 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:06.359 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 552326 00:25:06.359 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 552326 ']' 00:25:06.359 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 552326 00:25:06.359 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:06.359 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:06.359 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 552326 00:25:06.359 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:06.359 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:06.359 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 552326' 00:25:06.359 killing process with pid 552326 00:25:06.359 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 552326 00:25:06.359 12:34:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 552326 00:25:06.359 { 00:25:06.359 "results": [ 00:25:06.359 { 00:25:06.359 "job": "Nvme0n1", 00:25:06.359 "core_mask": "0x4", 00:25:06.359 "workload": "verify", 00:25:06.359 "status": "terminated", 00:25:06.359 "verify_range": { 00:25:06.359 "start": 0, 00:25:06.359 "length": 16384 00:25:06.359 }, 00:25:06.359 "queue_depth": 128, 00:25:06.359 "io_size": 4096, 00:25:06.359 "runtime": 28.942231, 00:25:06.359 "iops": 10415.679427062827, 00:25:06.359 "mibps": 40.68624776196417, 00:25:06.359 "io_failed": 0, 00:25:06.359 "io_timeout": 0, 00:25:06.359 "avg_latency_us": 12267.672130058778, 00:25:06.359 "min_latency_us": 555.6313043478261, 00:25:06.359 "max_latency_us": 3092843.2973913043 00:25:06.359 } 00:25:06.359 ], 00:25:06.359 "core_count": 1 00:25:06.359 } 00:25:06.359 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 552326 00:25:06.359 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:06.359 [2024-11-20 12:34:19.144966] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:25:06.359 [2024-11-20 12:34:19.145020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid552326 ] 00:25:06.359 [2024-11-20 12:34:19.218798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.359 [2024-11-20 12:34:19.259837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:06.359 Running I/O for 90 seconds... 
00:25:06.359 10972.00 IOPS, 42.86 MiB/s [2024-11-20T11:34:49.475Z] 11163.50 IOPS, 43.61 MiB/s [2024-11-20T11:34:49.475Z] 11175.00 IOPS, 43.65 MiB/s [2024-11-20T11:34:49.475Z] 11160.50 IOPS, 43.60 MiB/s [2024-11-20T11:34:49.475Z] 11183.80 IOPS, 43.69 MiB/s [2024-11-20T11:34:49.475Z] 11200.33 IOPS, 43.75 MiB/s [2024-11-20T11:34:49.475Z] 11222.00 IOPS, 43.84 MiB/s [2024-11-20T11:34:49.475Z] 11228.50 IOPS, 43.86 MiB/s [2024-11-20T11:34:49.475Z] 11230.00 IOPS, 43.87 MiB/s [2024-11-20T11:34:49.475Z] 11229.00 IOPS, 43.86 MiB/s [2024-11-20T11:34:49.475Z] 11218.00 IOPS, 43.82 MiB/s [2024-11-20T11:34:49.475Z] 11228.83 IOPS, 43.86 MiB/s [2024-11-20T11:34:49.475Z] [2024-11-20 12:34:33.103621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.359 [2024-11-20 12:34:33.103658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:06.359 [2024-11-20 12:34:33.103679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.359 [2024-11-20 12:34:33.103704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:06.359 [2024-11-20 12:34:33.103718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.359 [2024-11-20 12:34:33.103725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:06.359 [2024-11-20 12:34:33.103738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.359 [2024-11-20 12:34:33.103745] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:06.359 [2024-11-20 12:34:33.103757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.359 [2024-11-20 12:34:33.103764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:06.359 [2024-11-20 12:34:33.103777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.359 [2024-11-20 12:34:33.103784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:06.359 [2024-11-20 12:34:33.103797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.359 [2024-11-20 12:34:33.103803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:06.359 [2024-11-20 12:34:33.103816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.359 [2024-11-20 12:34:33.103822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:06.359 [2024-11-20 12:34:33.103835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.359 [2024-11-20 12:34:33.103842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:06.359 [2024-11-20 12:34:33.103857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
00:25:06.359-00:25:06.361 [2024-11-20 12:34:33.103870 through 12:34:33.107509] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: repeated *NOTICE* command/completion pairs on qid:1: WRITE commands (nsid:1, lba:100176 through 100696, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (nsid:1, lba:99680 through 100032, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0; one such pair logged per cid.
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:06.361 [2024-11-20 12:34:33.107522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.361 [2024-11-20 12:34:33.107529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:06.361 [2024-11-20 12:34:33.107541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.361 [2024-11-20 12:34:33.107548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:06.361 [2024-11-20 12:34:33.107561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.361 [2024-11-20 12:34:33.107567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:06.361 [2024-11-20 12:34:33.107580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.361 [2024-11-20 12:34:33.107587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:06.361 [2024-11-20 12:34:33.107599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.361 [2024-11-20 12:34:33.107606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:06.361 [2024-11-20 12:34:33.107618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.361 [2024-11-20 12:34:33.107625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:06.361 [2024-11-20 12:34:33.107637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.361 [2024-11-20 12:34:33.107644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:06.361 [2024-11-20 12:34:33.107658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.361 [2024-11-20 12:34:33.107665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:06.361 [2024-11-20 12:34:33.107677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.361 [2024-11-20 12:34:33.107685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:06.361 [2024-11-20 12:34:33.107698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.361 [2024-11-20 12:34:33.107705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:06.361 [2024-11-20 12:34:33.107717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.361 [2024-11-20 12:34:33.107724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:06.361 [2024-11-20 12:34:33.107736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.361 [2024-11-20 12:34:33.107745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:06.361 [2024-11-20 12:34:33.107757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.361 [2024-11-20 12:34:33.107764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:06.361 [2024-11-20 12:34:33.107777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.361 [2024-11-20 12:34:33.107783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:06.361 [2024-11-20 12:34:33.107796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.361 [2024-11-20 12:34:33.107803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:06.361 [2024-11-20 12:34:33.107815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.361 [2024-11-20 12:34:33.107822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:06.361 [2024-11-20 12:34:33.107834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.107841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.107853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.107860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.107872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.107879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.107891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.107898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.107911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.362 [2024-11-20 12:34:33.107918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.107930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.362 [2024-11-20 12:34:33.107937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.107953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.362 [2024-11-20 12:34:33.107960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.107973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.362 [2024-11-20 12:34:33.107980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.107994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.362 [2024-11-20 12:34:33.108001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.108984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.108991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.109003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.109010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.109024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.109031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.109045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.109052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.109064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.109071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.109083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.109090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.109102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.109111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.109124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.109131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.109144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.109151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.109165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.109174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:06.362 [2024-11-20 12:34:33.109189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.362 [2024-11-20 12:34:33.120104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.120923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.363 [2024-11-20 12:34:33.120942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.363 [2024-11-20 12:34:33.120968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.120980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.363 [2024-11-20 12:34:33.120987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.121000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.363 [2024-11-20 12:34:33.121007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.121019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.363 [2024-11-20 12:34:33.121025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.121037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.363 [2024-11-20 12:34:33.121044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.121056] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.363 [2024-11-20 12:34:33.121063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.121075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.363 [2024-11-20 12:34:33.121082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.121095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.363 [2024-11-20 12:34:33.121101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.121114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.363 [2024-11-20 12:34:33.121120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.121134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.363 [2024-11-20 12:34:33.121141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.121153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.363 [2024-11-20 12:34:33.121160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.121172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.363 [2024-11-20 12:34:33.121179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.121191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.363 [2024-11-20 12:34:33.121198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.121211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.363 [2024-11-20 12:34:33.121217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.121246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.363 [2024-11-20 12:34:33.121256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:06.363 [2024-11-20 12:34:33.121272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.363 [2024-11-20 12:34:33.121281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.121974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.121991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.122000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.122017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.122027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.122044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.122053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.122070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.122081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.122098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.122107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.122124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.364 [2024-11-20 12:34:33.122133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.122150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.364 [2024-11-20 12:34:33.122159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.122176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.364 [2024-11-20 12:34:33.122186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.122203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.364 [2024-11-20 12:34:33.122212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.122229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.364 [2024-11-20 12:34:33.122238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:06.364 [2024-11-20 12:34:33.122255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.364 [2024-11-20 12:34:33.122264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.122281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.122290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.122307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.122316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.122333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.122344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.122361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.122371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.122388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.122398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.122416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.122426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.122442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.122451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.122468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.365 [2024-11-20 12:34:33.122477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.122494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.365 [2024-11-20 12:34:33.122504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.122520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.365 [2024-11-20 12:34:33.122529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.122547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.365 [2024-11-20 12:34:33.122556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.123445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.365 [2024-11-20 12:34:33.123462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.123482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.123492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.123509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.123518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.123535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.123544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.123561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.123570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.123587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.123596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.123616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.123626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.123643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.123653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.123669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.123679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.123695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.123705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.123721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.123731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.123747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.123756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.123773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.123782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:06.365 [2024-11-20 12:34:33.123799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.365 [2024-11-20 12:34:33.123808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0
[... repeated near-identical command/completion pairs elided: WRITE commands for lba:100304-100696 and READ commands for lba:99680-100096 (len:8 each) on sqid:1, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, sqhd:000b-007b, timestamps 2024-11-20 12:34:33.123-12:34:33.133 ...]
00:25:06.368 [2024-11-20 12:34:33.133545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.368 [2024-11-20 12:34:33.133555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:06.368 [2024-11-20 12:34:33.133573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.368 [2024-11-20 12:34:33.133585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:06.368 [2024-11-20 12:34:33.133603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-11-20 12:34:33.133613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:06.368 [2024-11-20 12:34:33.133631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-11-20 12:34:33.133640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:06.368 [2024-11-20 12:34:33.133658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-11-20 12:34:33.133668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-11-20 12:34:33.133685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-11-20 12:34:33.133695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.368 [2024-11-20 12:34:33.133712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-11-20 12:34:33.133722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.368 [2024-11-20 12:34:33.133740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-11-20 12:34:33.133749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:06.368 [2024-11-20 12:34:33.133767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-11-20 12:34:33.133776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:06.368 [2024-11-20 12:34:33.133794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-11-20 12:34:33.133803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.133821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.133831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.133849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.133859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.133876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.133886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.133903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.133915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.133933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.133943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.133964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.133974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.133992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.134001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.134019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.134029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.134046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.134056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.134074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.134084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.134103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.134113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.134927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.134946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.134974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.134984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135311] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.369 [2024-11-20 12:34:33.135680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:06.369 [2024-11-20 12:34:33.135698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-11-20 12:34:33.135708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.135725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-11-20 12:34:33.135735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.135754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-11-20 12:34:33.135764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.135782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-11-20 12:34:33.135792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.135809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-11-20 12:34:33.135819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.135836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-11-20 12:34:33.135846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.135864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-11-20 12:34:33.135875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.135895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-11-20 12:34:33.135907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.135927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-11-20 12:34:33.135939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.135965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-11-20 12:34:33.135978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.135999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-11-20 12:34:33.136009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.136028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-11-20 12:34:33.136039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.136057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-11-20 12:34:33.136067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.136085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-11-20 12:34:33.136095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.136112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-11-20 12:34:33.136124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.136142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-11-20 12:34:33.136151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.136169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.370 [2024-11-20 12:34:33.136179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.136196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.370 [2024-11-20 12:34:33.136206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.136225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.370 [2024-11-20 12:34:33.136235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.136253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.370 [2024-11-20 12:34:33.136262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.136280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.370 [2024-11-20 12:34:33.136290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.136308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.370 [2024-11-20 12:34:33.136317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.136335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.370 [2024-11-20 12:34:33.136345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.136362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.370 [2024-11-20 12:34:33.136372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:06.370 [2024-11-20 12:34:33.136390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.370 [2024-11-20 12:34:33.136400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:06.370 [2024-11-20 12:34:33.136417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.370 [2024-11-20 12:34:33.136427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:06.370 [2024-11-20 12:34:33.136445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.370 [2024-11-20 12:34:33.136459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
[... several hundred further READ/WRITE command/completion pairs on qid:1 (lba 99680-100656, len:8), all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), omitted ...]
00:25:06.373 [2024-11-20 12:34:33.141065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.373 [2024-11-20 12:34:33.141075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:25:06.373 [2024-11-20 12:34:33.141093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.373 [2024-11-20 12:34:33.141103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:06.373 [2024-11-20 12:34:33.141120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.373 [2024-11-20 12:34:33.141130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:06.373 [2024-11-20 12:34:33.141147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.373 [2024-11-20 12:34:33.141157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:06.373 [2024-11-20 12:34:33.141175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.373 [2024-11-20 12:34:33.141185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:06.373 [2024-11-20 12:34:33.141202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-11-20 12:34:33.141212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:06.373 [2024-11-20 12:34:33.141229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-11-20 12:34:33.141239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:06.373 [2024-11-20 12:34:33.141258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-11-20 12:34:33.141279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:06.373 [2024-11-20 12:34:33.141295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-11-20 12:34:33.141304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:06.373 [2024-11-20 12:34:33.141319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-11-20 12:34:33.141330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:06.373 [2024-11-20 12:34:33.141345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-11-20 12:34:33.141354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.373 [2024-11-20 12:34:33.141370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-11-20 12:34:33.141379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.373 [2024-11-20 12:34:33.141394] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.141403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.141418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.141427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.141443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.141452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.141467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.141476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.141491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.141500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.141516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.141525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-11-20 12:34:33.142187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:06.374 [2024-11-20 12:34:33.142917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.374 [2024-11-20 12:34:33.142925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.142941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.375 [2024-11-20 12:34:33.142954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.142970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.375 [2024-11-20 12:34:33.142979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.142995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.375 [2024-11-20 12:34:33.143004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-11-20 12:34:33.143028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-11-20 12:34:33.143052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-11-20 12:34:33.143077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-11-20 12:34:33.143101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-11-20 12:34:33.143127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-11-20 12:34:33.143152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-11-20 12:34:33.143176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-11-20 12:34:33.143201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-11-20 12:34:33.143225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-11-20 12:34:33.143249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-11-20 12:34:33.143273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-11-20 12:34:33.143297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.375 [2024-11-20 12:34:33.143322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143337] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.375 [2024-11-20 12:34:33.143346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.375 [2024-11-20 12:34:33.143370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.375 [2024-11-20 12:34:33.143395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.375 [2024-11-20 12:34:33.143419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-11-20 12:34:33.143449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-11-20 12:34:33.143473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:06.375 [2024-11-20 12:34:33.143489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-11-20 12:34:33.143497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_qpair.c NOTICE pairs elided (timestamps 12:34:33.143489 through 12:34:33.147607, sqhd 0000 through 0072): WRITE commands (sqid:1, nsid:1, lba 100216-100696, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (sqid:1, nsid:1, lba 99720-100096, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-11-20 12:34:33.147616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:06.378 [2024-11-20 12:34:33.147632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-11-20 12:34:33.147641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:06.378 [2024-11-20 12:34:33.147656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-11-20 12:34:33.147665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:06.378 [2024-11-20 12:34:33.147680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-11-20 12:34:33.147689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:06.378 [2024-11-20 12:34:33.147705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-11-20 12:34:33.147714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:06.378 [2024-11-20 12:34:33.147732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-11-20 12:34:33.147741] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:06.378 [2024-11-20 12:34:33.147757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.378 [2024-11-20 12:34:33.147766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:06.378 [2024-11-20 12:34:33.147781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.378 [2024-11-20 12:34:33.147790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:06.378 [2024-11-20 12:34:33.147806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.378 [2024-11-20 12:34:33.147815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:06.378 [2024-11-20 12:34:33.147830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.378 [2024-11-20 12:34:33.147839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:06.378 [2024-11-20 12:34:33.147855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.378 [2024-11-20 12:34:33.147864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:06.378 [2024-11-20 12:34:33.147881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-11-20 12:34:33.147890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:06.378 [2024-11-20 12:34:33.147906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-11-20 12:34:33.147916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:06.378 [2024-11-20 12:34:33.147932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.378 [2024-11-20 12:34:33.147942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.147963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.147973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.147989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.147999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.148016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.148025] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.148044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.148053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.148069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.148078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.148093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.148102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.148118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.148127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.148144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.148153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.148170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.148179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.148195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.148204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.148220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.148231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.148247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.148256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.148272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.148281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.148972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.148988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149130] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.379 [2024-11-20 12:34:33.149579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:06.379 [2024-11-20 12:34:33.149594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.149603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:06.380 [2024-11-20 12:34:33.149618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.149627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:06.380 [2024-11-20 12:34:33.149645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.149653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:06.380 [2024-11-20 12:34:33.149669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.149678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:06.380 [2024-11-20 12:34:33.149693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.149702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:06.380 [2024-11-20 12:34:33.149717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.149726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:06.380 [2024-11-20 12:34:33.149742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.149750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:06.380 [2024-11-20 12:34:33.149766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.149775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:06.380 [2024-11-20 12:34:33.149790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.149799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:06.380 [2024-11-20 12:34:33.149814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.149823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:06.380 [2024-11-20 12:34:33.149838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.149847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:06.380 [2024-11-20 12:34:33.149863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.149871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:06.380 [2024-11-20 12:34:33.149887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.149896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:06.380 [2024-11-20 12:34:33.149911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.149920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:06.380 [2024-11-20 12:34:33.149939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.149952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:06.380 [2024-11-20 12:34:33.149968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.149977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:06.380 [2024-11-20 12:34:33.149993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.150001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:06.380 [2024-11-20 12:34:33.150017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.150025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:06.380 [2024-11-20 12:34:33.150041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.150050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:06.380 [2024-11-20 12:34:33.150065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.380 [2024-11-20 12:34:33.150074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:25:06.380 [2024-11-20 12:34:33.150090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.380 [2024-11-20 12:34:33.150098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:25:06.380 [2024-11-20 12:34:33.150114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.380 [2024-11-20 12:34:33.150123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:25:06.380 [2024-11-20 12:34:33.150138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.380 [2024-11-20 12:34:33.150147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:25:06.380 [2024-11-20 12:34:33.150163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.380 [2024-11-20 12:34:33.150172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:25:06.380 [2024-11-20 12:34:33.150188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.380 [2024-11-20 12:34:33.150197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:25:06.380 [2024-11-20 12:34:33.150212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.380 [2024-11-20 12:34:33.150221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:25:06.380 [2024-11-20 12:34:33.150239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.380 [2024-11-20 12:34:33.150248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:06.380 [2024-11-20 12:34:33.150263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.380 [2024-11-20 12:34:33.150275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:06.380 [2024-11-20 12:34:33.150292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.380 [2024-11-20 12:34:33.150302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:06.380 [2024-11-20 12:34:33.150318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.380 [2024-11-20 12:34:33.150326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:06.380 [2024-11-20 12:34:33.150342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.380 [2024-11-20 12:34:33.150351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:06.380 [2024-11-20 12:34:33.150366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.380 [2024-11-20 12:34:33.150375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:25:06.380 [2024-11-20 12:34:33.150979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.380 [2024-11-20 12:34:33.150994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:25:06.380 [2024-11-20 12:34:33.151012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.380 [2024-11-20 12:34:33.151021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:25:06.380 [2024-11-20 12:34:33.151038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.380 [2024-11-20 12:34:33.151046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:25:06.380 [2024-11-20 12:34:33.151063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.381 [2024-11-20 12:34:33.151096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.381 [2024-11-20 12:34:33.151837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.381 [2024-11-20 12:34:33.151857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.381 [2024-11-20 12:34:33.151876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.381 [2024-11-20 12:34:33.151895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:25:06.381 [2024-11-20 12:34:33.151907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.151915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.151927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.151934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.151951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.151959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.151971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.151978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.151990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.151997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.152017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.152036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.152055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.152074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.382 [2024-11-20 12:34:33.152093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.382 [2024-11-20 12:34:33.152112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.382 [2024-11-20 12:34:33.152131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.382 [2024-11-20 12:34:33.152150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.382 [2024-11-20 12:34:33.152169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.152191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.152210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.152230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.152249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.152268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.152287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.152306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.152325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.152345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.152365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.152384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.152403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.152424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.152443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.152456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.152463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.153008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.153021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.153035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.153042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.153054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.153061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.153074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.153081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.153093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.153100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.153112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.382 [2024-11-20 12:34:33.153119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:25:06.382 [2024-11-20 12:34:33.153131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.383 [2024-11-20 12:34:33.153644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:25:06.383 [2024-11-20 12:34:33.153656] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.383 [2024-11-20 12:34:33.153663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:06.383 [2024-11-20 12:34:33.153677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.383 [2024-11-20 12:34:33.153684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:06.383 [2024-11-20 12:34:33.153696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.383 [2024-11-20 12:34:33.153703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:06.383 [2024-11-20 12:34:33.153715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.383 [2024-11-20 12:34:33.153722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:06.383 [2024-11-20 12:34:33.153734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.383 [2024-11-20 12:34:33.153741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:06.383 [2024-11-20 12:34:33.153753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.383 [2024-11-20 12:34:33.153761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:06.383 [2024-11-20 12:34:33.153774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.383 [2024-11-20 12:34:33.153781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:06.383 [2024-11-20 12:34:33.153792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.383 [2024-11-20 12:34:33.153800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:06.383 [2024-11-20 12:34:33.153813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.383 [2024-11-20 12:34:33.153820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:06.383 [2024-11-20 12:34:33.153832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.383 [2024-11-20 12:34:33.153839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:06.383 [2024-11-20 12:34:33.153851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.383 [2024-11-20 12:34:33.153858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:06.383 [2024-11-20 12:34:33.153870] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.383 [2024-11-20 12:34:33.153877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:06.383 [2024-11-20 12:34:33.153890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.384 [2024-11-20 12:34:33.153896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.153909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.384 [2024-11-20 12:34:33.153917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.153930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.153936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.153954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.153961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.153974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.153981] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.153993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154090] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.384 [2024-11-20 12:34:33.154703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:06.384 [2024-11-20 12:34:33.154952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.384 [2024-11-20 12:34:33.154959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.154972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.154978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.154991] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.154998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.385 [2024-11-20 12:34:33.155368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.385 [2024-11-20 12:34:33.155387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.385 [2024-11-20 12:34:33.155406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.385 [2024-11-20 12:34:33.155426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.385 [2024-11-20 12:34:33.155445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.385 [2024-11-20 12:34:33.155464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.385 [2024-11-20 12:34:33.155484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.385 [2024-11-20 12:34:33.155503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.385 [2024-11-20 12:34:33.155522] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.385 [2024-11-20 12:34:33.155540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.385 [2024-11-20 12:34:33.155559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.385 [2024-11-20 12:34:33.155581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.385 [2024-11-20 12:34:33.155676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:06.385 [2024-11-20 12:34:33.155689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.385 [2024-11-20 12:34:33.155696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.155709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.155715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156277] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.386 [2024-11-20 12:34:33.156789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:06.386 [2024-11-20 12:34:33.156802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.156808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.156821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.156828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.156840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.156847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.156859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.156866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.156879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.156886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.156899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.156906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.156919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.156926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.156938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.156945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.156964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.156971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.156984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.156991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.157010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.157029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.157048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.157068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.157088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.157553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.157575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.157595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.157614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.157633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.157655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.157675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.157695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.157796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.157817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.157837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.387 [2024-11-20 12:34:33.157856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.387 [2024-11-20 12:34:33.157876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.387 [2024-11-20 12:34:33.157895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.387 [2024-11-20 12:34:33.157916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.387 [2024-11-20 12:34:33.157935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.387 [2024-11-20 12:34:33.157961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.387 [2024-11-20 12:34:33.157983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.157996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.387 [2024-11-20 12:34:33.158004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.158016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.387 [2024-11-20 12:34:33.158023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.158036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.387 [2024-11-20 12:34:33.158044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.158056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.387 [2024-11-20 12:34:33.158064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.158076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.387 [2024-11-20 12:34:33.158083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.158095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.387 [2024-11-20 12:34:33.158102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:06.387 [2024-11-20 12:34:33.158115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.388 [2024-11-20 12:34:33.158181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.388 [2024-11-20 12:34:33.158790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:06.388 [2024-11-20 12:34:33.158802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.389 [2024-11-20 12:34:33.158809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.158821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.389 [2024-11-20 12:34:33.158828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.158840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.158847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.158860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.158867] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.158879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.158886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.389 [2024-11-20 12:34:33.159483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.389 [2024-11-20 12:34:33.159504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.389 [2024-11-20 12:34:33.159524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.389 [2024-11-20 12:34:33.159545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.389 [2024-11-20 12:34:33.159564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:06.389 [2024-11-20 12:34:33.159853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.389 [2024-11-20 12:34:33.159860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.159872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.159879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.159892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.159899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.159911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.159918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.159930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.159937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.159953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.159961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.159973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.159980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.159992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.159999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.390 [2024-11-20 12:34:33.160976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:06.390 [2024-11-20 12:34:33.160988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.391 [2024-11-20 12:34:33.161428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003c p:0 m:0 dnr:0
[repetitive command/completion pairs elided: between 12:34:33.160969 and 12:34:33.164236 every outstanding WRITE (lba 100104-100696, len:8, SGL DATA BLOCK OFFSET) and READ (lba 99680-100096, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT) on qid:1 was printed by nvme_qpair.c:243 nvme_io_qpair_print_command and completed by nvme_qpair.c:474 spdk_nvme_print_completion with status ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0030-0023, p:0 m:0 dnr:0]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.393 [2024-11-20 12:34:33.164243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:06.393 [2024-11-20 12:34:33.164255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.393 [2024-11-20 12:34:33.164262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:06.393 [2024-11-20 12:34:33.164274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.393 [2024-11-20 12:34:33.164281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:06.393 [2024-11-20 12:34:33.164294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.393 [2024-11-20 12:34:33.164300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:06.393 [2024-11-20 12:34:33.164313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.393 [2024-11-20 12:34:33.164320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:06.393 [2024-11-20 12:34:33.164333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.393 [2024-11-20 12:34:33.164342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:06.393 [2024-11-20 12:34:33.164355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.393 [2024-11-20 12:34:33.164362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:06.393 [2024-11-20 12:34:33.164374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.393 [2024-11-20 12:34:33.164381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:06.393 [2024-11-20 12:34:33.164393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.393 [2024-11-20 12:34:33.164400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:06.393 [2024-11-20 12:34:33.164412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.393 [2024-11-20 12:34:33.164419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:06.393 [2024-11-20 12:34:33.164431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.393 [2024-11-20 12:34:33.164439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:06.393 [2024-11-20 12:34:33.164451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.393 [2024-11-20 12:34:33.164458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:06.393 [2024-11-20 12:34:33.164470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.393 [2024-11-20 12:34:33.164477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:06.393 [2024-11-20 12:34:33.164490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.393 [2024-11-20 12:34:33.164496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.164779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.394 [2024-11-20 12:34:33.164789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.164803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.394 [2024-11-20 12:34:33.164810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.164822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.394 [2024-11-20 12:34:33.164830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.164843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.394 [2024-11-20 12:34:33.164851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.164867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.394 [2024-11-20 12:34:33.164874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.164886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.394 [2024-11-20 12:34:33.164893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.164905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.394 [2024-11-20 12:34:33.164912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.164925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.394 [2024-11-20 12:34:33.164932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.394 [2024-11-20 12:34:33.165234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.394 [2024-11-20 12:34:33.165255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.394 [2024-11-20 12:34:33.165711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.165983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.165990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.166002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.166014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.166026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.166033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.166045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.166052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:06.394 [2024-11-20 12:34:33.166065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.394 [2024-11-20 12:34:33.166071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.166385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.166404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.166423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.166443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.166462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.166481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.166502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.166521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.166540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.166559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.166578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.166598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.395 [2024-11-20 12:34:33.166698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.166711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.166718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.167114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.167125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.167139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.167146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.167158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.167165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.167178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.167185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.167197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.167205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.167217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.167224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.167236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.167243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.167256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.167263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.167275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.167282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.167294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.167301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.167313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.395 [2024-11-20 12:34:33.167320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:06.395 [2024-11-20 12:34:33.167332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.167878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.167885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.168033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168057] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.168066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.168087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.168109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.168131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.168152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.168174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.168199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.168221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.168245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.168266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.168288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.168309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.168330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.168352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.168373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.168866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.168890] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.396 [2024-11-20 12:34:33.168912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:06.396 [2024-11-20 12:34:33.168927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.397 [2024-11-20 12:34:33.168935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.168955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.397 [2024-11-20 12:34:33.168962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.168978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.168988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.397 [2024-11-20 12:34:33.169324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.169981] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.169998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.170006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:06.397 [2024-11-20 12:34:33.170022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.397 [2024-11-20 12:34:33.170029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.398 [2024-11-20 12:34:33.170053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.398 [2024-11-20 12:34:33.170077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.398 [2024-11-20 12:34:33.170102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.398 [2024-11-20 12:34:33.170126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.398 [2024-11-20 12:34:33.170150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.398 [2024-11-20 12:34:33.170174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.398 [2024-11-20 12:34:33.170200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.398 [2024-11-20 12:34:33.170224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.398 [2024-11-20 12:34:33.170248] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.398 [2024-11-20 12:34:33.170272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.398 [2024-11-20 12:34:33.170296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.398 [2024-11-20 12:34:33.170321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.398 [2024-11-20 12:34:33.170345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.398 [2024-11-20 12:34:33.170369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170386] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.398 [2024-11-20 12:34:33.170393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.398 [2024-11-20 12:34:33.170417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.398 [2024-11-20 12:34:33.170441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.398 [2024-11-20 12:34:33.170465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.398 [2024-11-20 12:34:33.170491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.398 [2024-11-20 12:34:33.170515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.398 [2024-11-20 12:34:33.170539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.398 [2024-11-20 12:34:33.170565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.398 [2024-11-20 12:34:33.170589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:33.170607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.398 [2024-11-20 12:34:33.170615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:06.398 11040.92 IOPS, 43.13 MiB/s [2024-11-20T11:34:49.514Z] 10252.29 IOPS, 40.05 MiB/s [2024-11-20T11:34:49.514Z] 9568.80 IOPS, 37.38 MiB/s [2024-11-20T11:34:49.514Z] 9081.62 IOPS, 35.48 MiB/s [2024-11-20T11:34:49.514Z] 9215.76 IOPS, 36.00 MiB/s [2024-11-20T11:34:49.514Z] 9333.33 IOPS, 36.46 MiB/s [2024-11-20T11:34:49.514Z] 9495.42 IOPS, 37.09 MiB/s [2024-11-20T11:34:49.514Z] 9684.20 IOPS, 37.83 MiB/s 
[2024-11-20T11:34:49.514Z] 9860.67 IOPS, 38.52 MiB/s [2024-11-20T11:34:49.514Z] 9924.59 IOPS, 38.77 MiB/s [2024-11-20T11:34:49.514Z] 9975.39 IOPS, 38.97 MiB/s [2024-11-20T11:34:49.514Z] 10032.25 IOPS, 39.19 MiB/s [2024-11-20T11:34:49.514Z] 10164.36 IOPS, 39.70 MiB/s [2024-11-20T11:34:49.514Z] 10286.08 IOPS, 40.18 MiB/s [2024-11-20T11:34:49.514Z] [2024-11-20 12:34:46.975279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:108872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.398 [2024-11-20 12:34:46.975317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:46.975351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:108904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.398 [2024-11-20 12:34:46.975359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:46.975373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.398 [2024-11-20 12:34:46.975380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:46.975394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.398 [2024-11-20 12:34:46.975401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:46.975413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.398 [2024-11-20 
12:34:46.975420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:46.975439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.398 [2024-11-20 12:34:46.975446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:06.398 [2024-11-20 12:34:46.975459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.399 [2024-11-20 12:34:46.975466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.975478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:108896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.399 [2024-11-20 12:34:46.975486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.975498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:108928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.399 [2024-11-20 12:34:46.975505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.975518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.399 [2024-11-20 12:34:46.975524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 
12:34:46.975537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.399 [2024-11-20 12:34:46.975544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.975557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.399 [2024-11-20 12:34:46.975564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.976371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.976395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.976412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.976420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.976679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:109080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.976691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.976706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:109096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 
12:34:46.976714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.976727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.976734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.976747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:109128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.976760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.976772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.976780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.976792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:109160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.976799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.976812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.976819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 
12:34:46.976832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:109192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.976839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.976851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.976859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.978448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:109224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.978472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.978491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:109240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.978499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.978512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.978519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.978531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 
12:34:46.978539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.978551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.978558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.978571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.978578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.978590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.978600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.978613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:109336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.978620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.978632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.978639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 
12:34:46.978652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:109368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.978659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.978671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.978678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.978691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.978698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.978710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:109416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.978717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.978729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 12:34:46.978736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:06.399 [2024-11-20 12:34:46.978749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.399 [2024-11-20 
12:34:46.978755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:25:06.399 [2024-11-20 12:34:46.978768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.399 [2024-11-20 12:34:46.978775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:25:06.399 [2024-11-20 12:34:46.978787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.399 [2024-11-20 12:34:46.978794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:06.399 [2024-11-20 12:34:46.978807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.399 [2024-11-20 12:34:46.978814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:25:06.399 [2024-11-20 12:34:46.978826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:109512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.399 [2024-11-20 12:34:46.978834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:25:06.399 10371.19 IOPS, 40.51 MiB/s [2024-11-20T11:34:49.515Z] 10401.21 IOPS, 40.63 MiB/s [2024-11-20T11:34:49.515Z] Received shutdown signal, test time was about 28.942906 seconds
00:25:06.399
00:25:06.399 Latency(us)
00:25:06.399 [2024-11-20T11:34:49.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:06.399 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:06.399 Verification LBA range: start 0x0 length 0x4000
00:25:06.399 Nvme0n1 : 28.94 10415.68 40.69 0.00 0.00 12267.67 555.63 3092843.30
00:25:06.399 [2024-11-20T11:34:49.515Z] ===================================================================================================================
00:25:06.399 [2024-11-20T11:34:49.515Z] Total : 10415.68 40.69 0.00 0.00 12267.67 555.63 3092843.30
00:25:06.400 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:06.658 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 552070 ']'
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 552070
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 552070 ']'
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 552070
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 552070
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 552070'
killing process with pid 552070
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 552070
00:25:06.658 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 552070
00:25:06.918 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:06.918 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:06.918 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:06.918 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:25:06.918 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:25:06.918 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:06.918 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:25:06.918 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:06.918 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:06.918 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:06.918 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:06.918 12:34:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:09.454 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:09.455
00:25:09.455 real 0m40.638s
00:25:09.455 user 1m50.079s
00:25:09.455 sys 0m11.625s
00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:09.455 ************************************
00:25:09.455 END TEST nvmf_host_multipath_status
00:25:09.455 ************************************
00:25:09.455 12:34:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:25:09.455 12:34:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.455 ************************************ 00:25:09.455 START TEST nvmf_discovery_remove_ifc 00:25:09.455 ************************************ 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:09.455 * Looking for test storage... 00:25:09.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:09.455 12:34:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:09.455 12:34:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:09.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.455 --rc genhtml_branch_coverage=1 00:25:09.455 --rc genhtml_function_coverage=1 00:25:09.455 --rc genhtml_legend=1 00:25:09.455 --rc geninfo_all_blocks=1 00:25:09.455 --rc geninfo_unexecuted_blocks=1 00:25:09.455 00:25:09.455 ' 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:09.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.455 --rc genhtml_branch_coverage=1 00:25:09.455 --rc genhtml_function_coverage=1 00:25:09.455 --rc genhtml_legend=1 00:25:09.455 --rc geninfo_all_blocks=1 00:25:09.455 --rc geninfo_unexecuted_blocks=1 00:25:09.455 00:25:09.455 ' 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:09.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.455 --rc genhtml_branch_coverage=1 00:25:09.455 --rc genhtml_function_coverage=1 00:25:09.455 --rc genhtml_legend=1 00:25:09.455 --rc geninfo_all_blocks=1 00:25:09.455 --rc geninfo_unexecuted_blocks=1 00:25:09.455 00:25:09.455 ' 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:09.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.455 --rc genhtml_branch_coverage=1 00:25:09.455 --rc genhtml_function_coverage=1 00:25:09.455 --rc genhtml_legend=1 00:25:09.455 --rc geninfo_all_blocks=1 00:25:09.455 --rc geninfo_unexecuted_blocks=1 00:25:09.455 00:25:09.455 ' 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:09.455 12:34:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.455 
12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.455 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:09.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:09.456 
12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:09.456 12:34:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.027 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:16.027 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:16.027 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:16.027 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:16.027 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:16.027 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:16.027 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:16.027 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:16.027 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:16.027 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:16.027 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:16.027 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:16.028 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:16.028 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:16.028 Found net devices under 0000:86:00.0: cvl_0_0 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.028 12:34:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:16.028 Found net devices under 0000:86:00.1: cvl_0_1 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:16.028 12:34:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:16.028 12:34:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:16.028 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:16.028 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:16.028 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:16.028 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:16.028 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:16.028 12:34:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:16.028 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:16.028 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:16.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:25:16.028 00:25:16.028 --- 10.0.0.2 ping statistics --- 00:25:16.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.028 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:25:16.028 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:16.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:16.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:25:16.028 00:25:16.028 --- 10.0.0.1 ping statistics --- 00:25:16.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.028 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:25:16.028 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.028 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:16.028 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:16.028 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:16.028 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:16.028 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:16.028 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:16.028 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:16.028 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:16.028 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=560876 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 560876 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 560876 ']' 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.029 [2024-11-20 12:34:58.267856] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:25:16.029 [2024-11-20 12:34:58.267906] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.029 [2024-11-20 12:34:58.347537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.029 [2024-11-20 12:34:58.387341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.029 [2024-11-20 12:34:58.387378] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:16.029 [2024-11-20 12:34:58.387385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.029 [2024-11-20 12:34:58.387391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.029 [2024-11-20 12:34:58.387417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:16.029 [2024-11-20 12:34:58.387992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.029 [2024-11-20 12:34:58.544025] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.029 [2024-11-20 12:34:58.552223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:16.029 null0 00:25:16.029 [2024-11-20 12:34:58.584190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=561054 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 561054 /tmp/host.sock 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 561054 ']' 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:16.029 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.029 [2024-11-20 12:34:58.653989] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:25:16.029 [2024-11-20 12:34:58.654032] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid561054 ] 00:25:16.029 [2024-11-20 12:34:58.726312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.029 [2024-11-20 12:34:58.771551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.029 12:34:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.029 12:34:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:16.964 [2024-11-20 12:34:59.920403] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:16.964 [2024-11-20 12:34:59.920426] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:16.964 [2024-11-20 12:34:59.920440] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:16.964 [2024-11-20 12:35:00.006706] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:17.223 [2024-11-20 12:35:00.223053] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:17.223 [2024-11-20 12:35:00.223961] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xe739f0:1 started. 
00:25:17.223 [2024-11-20 12:35:00.225346] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:17.223 [2024-11-20 12:35:00.225384] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:17.223 [2024-11-20 12:35:00.225402] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:17.223 [2024-11-20 12:35:00.225415] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:17.223 [2024-11-20 12:35:00.225434] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:17.223 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.223 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:17.223 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:17.223 [2024-11-20 12:35:00.229779] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xe739f0 was disconnected and freed. delete nvme_qpair. 
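The `get_bdev_list` / `sleep 1` cycles that follow are the test harness's `wait_for_bdev` helper polling `bdev_get_bdevs` over `/tmp/host.sock` until the bdev list matches an expected value (first `nvme0n1`, later the empty string once the interface is torn down). A minimal sketch of that poll-until loop, with a simulated bdev getter standing in for the RPC call (all names here are hypothetical, not the harness's actual implementation):

```python
import time

def wait_for_bdev(get_bdev_list, expected, timeout=20.0, interval=1.0):
    """Poll get_bdev_list() until its sorted, space-joined output equals
    `expected`, mirroring the wait_for_bdev behavior visible in the log
    (repeated bdev_get_bdevs | jq | sort | xargs, then sleep 1).
    Raises TimeoutError if the condition is never met."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        names = " ".join(sorted(get_bdev_list()))
        if names == expected:
            return names
        time.sleep(interval)
    raise TimeoutError(f"bdev list never became {expected!r}")

# Simulated bdev listing: the bdev disappears after a few polls, as it
# does in the log once cvl_0_0 is brought down in the target namespace.
_polls = iter([["nvme0n1"], ["nvme0n1"], []])
assert wait_for_bdev(lambda: next(_polls), "", interval=0.01) == ""
```

In the real test the getter is an RPC round-trip, so the one-second interval bounds how quickly the harness observes the bdev's removal rather than how quickly the host library reacts to the link going down.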
00:25:17.223 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.223 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:17.223 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.223 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:17.223 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.223 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:17.223 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.223 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:17.224 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:17.224 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:17.483 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:17.483 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:17.483 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.483 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:17.483 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.483 12:35:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:17.483 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.483 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:17.483 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.483 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:17.483 12:35:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:18.420 12:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:18.420 12:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.420 12:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:18.420 12:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.420 12:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:18.420 12:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.420 12:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:18.420 12:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.420 12:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:18.420 12:35:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:19.797 12:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:25:19.797 12:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.797 12:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:19.797 12:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.797 12:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:19.797 12:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.797 12:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:19.797 12:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.797 12:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:19.797 12:35:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:20.733 12:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:20.733 12:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.733 12:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:20.733 12:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.733 12:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:20.733 12:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:20.733 12:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:20.733 12:35:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.733 12:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:20.733 12:35:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:21.670 12:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:21.670 12:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.670 12:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:21.670 12:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.670 12:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:21.670 12:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:21.670 12:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:21.670 12:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.670 12:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:21.670 12:35:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:22.608 12:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:22.608 12:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.608 12:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:22.608 12:35:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.608 12:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:22.608 12:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.608 12:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:22.608 12:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.608 [2024-11-20 12:35:05.676825] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:22.608 [2024-11-20 12:35:05.676862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.608 [2024-11-20 12:35:05.676872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.608 [2024-11-20 12:35:05.676881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.608 [2024-11-20 12:35:05.676888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.608 [2024-11-20 12:35:05.676895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.608 [2024-11-20 12:35:05.676902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.608 [2024-11-20 12:35:05.676909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.608 
[2024-11-20 12:35:05.676920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.608 [2024-11-20 12:35:05.676928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.608 [2024-11-20 12:35:05.676934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.608 [2024-11-20 12:35:05.676940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe50220 is same with the state(6) to be set 00:25:22.608 12:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:22.608 12:35:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:22.608 [2024-11-20 12:35:05.686847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe50220 (9): Bad file descriptor 00:25:22.608 [2024-11-20 12:35:05.696881] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:22.608 [2024-11-20 12:35:05.696892] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:22.608 [2024-11-20 12:35:05.696896] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:22.608 [2024-11-20 12:35:05.696900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:22.609 [2024-11-20 12:35:05.696917] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
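The disconnect/reconnect records above play out the policy the host was started with (`--reconnect-delay-sec 1 --ctrlr-loss-timeout-sec 2 --fast-io-fail-timeout-sec 1` in the `bdev_nvme_start_discovery` call): retry the connection once per second until the loss timeout expires, then give the controller up, which is what produces the later "Resetting controller failed" and bdev-removal records. A simplified sketch of that retry policy (a hand-rolled illustration, not SPDK's internal state machine):

```python
import time

def reconnect_until_loss_timeout(connect, reconnect_delay=1.0,
                                 ctrlr_loss_timeout=2.0):
    """Retry connect() every reconnect_delay seconds until it succeeds
    or ctrlr_loss_timeout elapses. Returns True on success, False once
    the controller is given up on, as in the log after cvl_0_0 is
    moved down and every connect() to 10.0.0.2:4420 fails."""
    deadline = time.monotonic() + ctrlr_loss_timeout
    while time.monotonic() < deadline:
        if connect():
            return True
        time.sleep(reconnect_delay)
    return False

# With the target interface down every attempt fails, so the loop runs
# out the loss timeout and reports failure (timeouts shrunk for demo).
assert reconnect_until_loss_timeout(lambda: False,
                                    reconnect_delay=0.01,
                                    ctrlr_loss_timeout=0.05) is False
```

The short 2-second loss timeout is what keeps this test fast: the bdev must disappear within a couple of polls of the interface going down, rather than after the much longer default reconnect window.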
00:25:23.988 12:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:23.988 12:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.988 12:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:23.988 12:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.988 12:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:23.988 12:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:23.988 12:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:23.988 [2024-11-20 12:35:06.735978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:23.988 [2024-11-20 12:35:06.736057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe50220 with addr=10.0.0.2, port=4420 00:25:23.988 [2024-11-20 12:35:06.736087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe50220 is same with the state(6) to be set 00:25:23.988 [2024-11-20 12:35:06.736135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe50220 (9): Bad file descriptor 00:25:23.988 [2024-11-20 12:35:06.737071] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:25:23.988 [2024-11-20 12:35:06.737132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:23.988 [2024-11-20 12:35:06.737154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:23.988 [2024-11-20 12:35:06.737176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:23.988 [2024-11-20 12:35:06.737196] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:23.988 [2024-11-20 12:35:06.737212] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:23.988 [2024-11-20 12:35:06.737225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:23.988 [2024-11-20 12:35:06.737246] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:23.988 [2024-11-20 12:35:06.737270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:23.988 12:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.988 12:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:23.988 12:35:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:24.924 [2024-11-20 12:35:07.739790] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:24.924 [2024-11-20 12:35:07.739809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:24.924 [2024-11-20 12:35:07.739819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:24.924 [2024-11-20 12:35:07.739825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:24.924 [2024-11-20 12:35:07.739832] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:24.924 [2024-11-20 12:35:07.739838] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:24.924 [2024-11-20 12:35:07.739843] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:24.924 [2024-11-20 12:35:07.739846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:24.924 [2024-11-20 12:35:07.739865] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:24.924 [2024-11-20 12:35:07.739883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.924 [2024-11-20 12:35:07.739892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.924 [2024-11-20 12:35:07.739900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.924 [2024-11-20 12:35:07.739907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.924 [2024-11-20 12:35:07.739920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:24.924 [2024-11-20 12:35:07.739926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.924 [2024-11-20 12:35:07.739933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.924 [2024-11-20 12:35:07.739940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.924 [2024-11-20 12:35:07.739950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.924 [2024-11-20 12:35:07.739957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.924 [2024-11-20 12:35:07.739964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:25:24.924 [2024-11-20 12:35:07.740471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3f900 (9): Bad file descriptor 00:25:24.924 [2024-11-20 12:35:07.741483] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:24.924 [2024-11-20 12:35:07.741493] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:24.925 12:35:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:25.864 12:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:25.864 12:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.864 12:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:25.864 12:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.864 12:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:25.864 12:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.864 12:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:25.864 12:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.124 12:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:26.124 12:35:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:26.690 [2024-11-20 12:35:09.796024] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:26.690 [2024-11-20 12:35:09.796040] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:26.690 [2024-11-20 12:35:09.796054] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:26.949 [2024-11-20 12:35:09.882322] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:26.949 12:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:26.949 12:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.949 12:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:26.949 12:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.949 12:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:26.949 12:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:26.949 12:35:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:26.949 12:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.949 12:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:26.949 12:35:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:26.949 [2024-11-20 12:35:10.059274] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:25:26.949 [2024-11-20 12:35:10.060149] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xe44760:1 started. 
00:25:26.949 [2024-11-20 12:35:10.061757] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:26.949 [2024-11-20 12:35:10.061789] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:26.949 [2024-11-20 12:35:10.061808] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:26.949 [2024-11-20 12:35:10.061822] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:26.949 [2024-11-20 12:35:10.061830] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:26.949 [2024-11-20 12:35:10.065265] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xe44760 was disconnected and freed. delete nvme_qpair. 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:28.328 12:35:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 561054 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 561054 ']' 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 561054 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 561054 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 561054' 00:25:28.328 killing process with pid 561054 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 561054 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 561054 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:28.328 12:35:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:28.328 rmmod nvme_tcp 00:25:28.328 rmmod nvme_fabrics 00:25:28.328 rmmod nvme_keyring 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 560876 ']' 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 560876 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 560876 ']' 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 560876 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 560876 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 560876' 00:25:28.328 killing process 
with pid 560876 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 560876 00:25:28.328 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 560876 00:25:28.587 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:28.587 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:28.587 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:28.587 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:28.587 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:28.587 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:28.587 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:28.587 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:28.587 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:28.587 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.587 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.587 12:35:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.125 12:35:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:31.125 00:25:31.125 real 0m21.544s 00:25:31.125 user 0m26.973s 00:25:31.125 sys 0m5.816s 00:25:31.125 12:35:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:25:31.125 12:35:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:31.125 ************************************ 00:25:31.125 END TEST nvmf_discovery_remove_ifc 00:25:31.125 ************************************ 00:25:31.125 12:35:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:31.125 12:35:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:31.125 12:35:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:31.125 12:35:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.125 ************************************ 00:25:31.125 START TEST nvmf_identify_kernel_target 00:25:31.125 ************************************ 00:25:31.125 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:31.125 * Looking for test storage... 
00:25:31.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:31.125 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:31.125 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:25:31.125 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:31.125 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:31.125 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:31.125 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:31.125 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:31.125 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:31.125 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:31.125 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:31.126 12:35:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:31.126 12:35:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:31.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.126 --rc genhtml_branch_coverage=1 00:25:31.126 --rc genhtml_function_coverage=1 00:25:31.126 --rc genhtml_legend=1 00:25:31.126 --rc geninfo_all_blocks=1 00:25:31.126 --rc geninfo_unexecuted_blocks=1 00:25:31.126 00:25:31.126 ' 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:31.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.126 --rc genhtml_branch_coverage=1 00:25:31.126 --rc genhtml_function_coverage=1 00:25:31.126 --rc genhtml_legend=1 00:25:31.126 --rc geninfo_all_blocks=1 00:25:31.126 --rc geninfo_unexecuted_blocks=1 00:25:31.126 00:25:31.126 ' 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:31.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.126 --rc genhtml_branch_coverage=1 00:25:31.126 --rc genhtml_function_coverage=1 00:25:31.126 --rc genhtml_legend=1 00:25:31.126 --rc geninfo_all_blocks=1 00:25:31.126 --rc geninfo_unexecuted_blocks=1 00:25:31.126 00:25:31.126 ' 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:31.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.126 --rc genhtml_branch_coverage=1 00:25:31.126 --rc genhtml_function_coverage=1 00:25:31.126 --rc genhtml_legend=1 00:25:31.126 --rc geninfo_all_blocks=1 00:25:31.126 --rc geninfo_unexecuted_blocks=1 00:25:31.126 00:25:31.126 ' 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:31.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:31.126 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:31.127 12:35:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.700 12:35:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:37.700 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.700 12:35:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:37.700 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.700 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.701 12:35:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:37.701 Found net devices under 0000:86:00.0: cvl_0_0 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:37.701 Found net devices under 0000:86:00.1: cvl_0_1 
00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:37.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:37.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:25:37.701 00:25:37.701 --- 10.0.0.2 ping statistics --- 00:25:37.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.701 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:37.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:25:37.701 00:25:37.701 --- 10.0.0.1 ping statistics --- 00:25:37.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.701 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:37.701 
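The trace above (nvmf/common.sh lines @250-@291) shows `nvmf_tcp_init` splitting one physical NIC's two ports between the default namespace and a private network namespace, so the SPDK target and the initiator can exchange real TCP traffic on a single host. A minimal dry-run sketch of that sequence follows; the interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the `10.0.0.0/24` addresses are taken from the log, while the `run()` wrapper is an illustrative helper that only prints each command so the sketch stays runnable without root or the actual hardware:

```shell
#!/bin/sh
# Dry-run sketch of the namespace setup traced by nvmf/common.sh:nvmf_tcp_init.
# run() prints the command instead of executing it, so this can be inspected
# without root privileges or the E810 NIC the CI node has.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # moved into the namespace, will carry 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the default namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk       # NVMF_TARGET_NAMESPACE in the trace

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port (4420) on the initiator-side interface.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# The trace verifies connectivity in both directions with single pings.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The two pings in the log (0.344 ms and 0.128 ms round-trip) confirm the namespace plumbing before the kernel target is configured.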
12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:37.701 12:35:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:39.610 Waiting for block devices as requested 00:25:39.610 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:39.870 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:39.870 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:39.870 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:39.870 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:40.129 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:40.129 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:40.129 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:40.389 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:40.389 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:40.389 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:40.389 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:40.647 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:40.647 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:40.647 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:25:40.907 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:40.907 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:40.907 12:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:40.907 12:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:40.907 12:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:40.907 12:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:40.907 12:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:40.907 12:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:40.907 12:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:40.907 12:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:40.907 12:35:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:40.907 No valid GPT data, bailing 00:25:40.907 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:40.907 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:41.167 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:41.167 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:41.167 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:41.167 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:41.167 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:41.167 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:41.167 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:41.167 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:41.167 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:41.167 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:41.167 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:41.167 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:41.167 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:41.167 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:41.167 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:41.167 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:41.167 00:25:41.167 Discovery Log Number of Records 2, Generation counter 2 00:25:41.167 =====Discovery Log Entry 0====== 00:25:41.167 trtype: tcp 00:25:41.167 adrfam: ipv4 00:25:41.167 subtype: current discovery subsystem 
00:25:41.167 treq: not specified, sq flow control disable supported 00:25:41.167 portid: 1 00:25:41.167 trsvcid: 4420 00:25:41.167 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:41.167 traddr: 10.0.0.1 00:25:41.167 eflags: none 00:25:41.167 sectype: none 00:25:41.167 =====Discovery Log Entry 1====== 00:25:41.167 trtype: tcp 00:25:41.167 adrfam: ipv4 00:25:41.167 subtype: nvme subsystem 00:25:41.167 treq: not specified, sq flow control disable supported 00:25:41.167 portid: 1 00:25:41.167 trsvcid: 4420 00:25:41.167 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:41.167 traddr: 10.0.0.1 00:25:41.167 eflags: none 00:25:41.167 sectype: none 00:25:41.167 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:41.167 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:41.167 ===================================================== 00:25:41.167 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:41.167 ===================================================== 00:25:41.167 Controller Capabilities/Features 00:25:41.167 ================================ 00:25:41.167 Vendor ID: 0000 00:25:41.167 Subsystem Vendor ID: 0000 00:25:41.167 Serial Number: 12c1fa1b150a50880230 00:25:41.167 Model Number: Linux 00:25:41.167 Firmware Version: 6.8.9-20 00:25:41.167 Recommended Arb Burst: 0 00:25:41.167 IEEE OUI Identifier: 00 00 00 00:25:41.167 Multi-path I/O 00:25:41.167 May have multiple subsystem ports: No 00:25:41.167 May have multiple controllers: No 00:25:41.167 Associated with SR-IOV VF: No 00:25:41.167 Max Data Transfer Size: Unlimited 00:25:41.167 Max Number of Namespaces: 0 00:25:41.167 Max Number of I/O Queues: 1024 00:25:41.167 NVMe Specification Version (VS): 1.3 00:25:41.167 NVMe Specification Version (Identify): 1.3 00:25:41.167 Maximum Queue Entries: 1024 
00:25:41.167 Contiguous Queues Required: No 00:25:41.167 Arbitration Mechanisms Supported 00:25:41.167 Weighted Round Robin: Not Supported 00:25:41.167 Vendor Specific: Not Supported 00:25:41.167 Reset Timeout: 7500 ms 00:25:41.167 Doorbell Stride: 4 bytes 00:25:41.167 NVM Subsystem Reset: Not Supported 00:25:41.167 Command Sets Supported 00:25:41.167 NVM Command Set: Supported 00:25:41.167 Boot Partition: Not Supported 00:25:41.167 Memory Page Size Minimum: 4096 bytes 00:25:41.167 Memory Page Size Maximum: 4096 bytes 00:25:41.167 Persistent Memory Region: Not Supported 00:25:41.167 Optional Asynchronous Events Supported 00:25:41.167 Namespace Attribute Notices: Not Supported 00:25:41.167 Firmware Activation Notices: Not Supported 00:25:41.167 ANA Change Notices: Not Supported 00:25:41.167 PLE Aggregate Log Change Notices: Not Supported 00:25:41.167 LBA Status Info Alert Notices: Not Supported 00:25:41.167 EGE Aggregate Log Change Notices: Not Supported 00:25:41.167 Normal NVM Subsystem Shutdown event: Not Supported 00:25:41.167 Zone Descriptor Change Notices: Not Supported 00:25:41.167 Discovery Log Change Notices: Supported 00:25:41.167 Controller Attributes 00:25:41.167 128-bit Host Identifier: Not Supported 00:25:41.167 Non-Operational Permissive Mode: Not Supported 00:25:41.167 NVM Sets: Not Supported 00:25:41.167 Read Recovery Levels: Not Supported 00:25:41.167 Endurance Groups: Not Supported 00:25:41.167 Predictable Latency Mode: Not Supported 00:25:41.167 Traffic Based Keep ALive: Not Supported 00:25:41.167 Namespace Granularity: Not Supported 00:25:41.167 SQ Associations: Not Supported 00:25:41.167 UUID List: Not Supported 00:25:41.167 Multi-Domain Subsystem: Not Supported 00:25:41.167 Fixed Capacity Management: Not Supported 00:25:41.167 Variable Capacity Management: Not Supported 00:25:41.167 Delete Endurance Group: Not Supported 00:25:41.167 Delete NVM Set: Not Supported 00:25:41.167 Extended LBA Formats Supported: Not Supported 00:25:41.167 Flexible 
Data Placement Supported: Not Supported 00:25:41.167 00:25:41.167 Controller Memory Buffer Support 00:25:41.167 ================================ 00:25:41.167 Supported: No 00:25:41.167 00:25:41.167 Persistent Memory Region Support 00:25:41.167 ================================ 00:25:41.167 Supported: No 00:25:41.167 00:25:41.167 Admin Command Set Attributes 00:25:41.167 ============================ 00:25:41.167 Security Send/Receive: Not Supported 00:25:41.167 Format NVM: Not Supported 00:25:41.167 Firmware Activate/Download: Not Supported 00:25:41.167 Namespace Management: Not Supported 00:25:41.167 Device Self-Test: Not Supported 00:25:41.167 Directives: Not Supported 00:25:41.167 NVMe-MI: Not Supported 00:25:41.167 Virtualization Management: Not Supported 00:25:41.167 Doorbell Buffer Config: Not Supported 00:25:41.167 Get LBA Status Capability: Not Supported 00:25:41.167 Command & Feature Lockdown Capability: Not Supported 00:25:41.167 Abort Command Limit: 1 00:25:41.167 Async Event Request Limit: 1 00:25:41.167 Number of Firmware Slots: N/A 00:25:41.167 Firmware Slot 1 Read-Only: N/A 00:25:41.167 Firmware Activation Without Reset: N/A 00:25:41.167 Multiple Update Detection Support: N/A 00:25:41.167 Firmware Update Granularity: No Information Provided 00:25:41.167 Per-Namespace SMART Log: No 00:25:41.167 Asymmetric Namespace Access Log Page: Not Supported 00:25:41.167 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:41.167 Command Effects Log Page: Not Supported 00:25:41.167 Get Log Page Extended Data: Supported 00:25:41.167 Telemetry Log Pages: Not Supported 00:25:41.167 Persistent Event Log Pages: Not Supported 00:25:41.167 Supported Log Pages Log Page: May Support 00:25:41.167 Commands Supported & Effects Log Page: Not Supported 00:25:41.167 Feature Identifiers & Effects Log Page:May Support 00:25:41.167 NVMe-MI Commands & Effects Log Page: May Support 00:25:41.167 Data Area 4 for Telemetry Log: Not Supported 00:25:41.167 Error Log Page Entries 
Supported: 1 00:25:41.167 Keep Alive: Not Supported 00:25:41.167 00:25:41.167 NVM Command Set Attributes 00:25:41.167 ========================== 00:25:41.167 Submission Queue Entry Size 00:25:41.167 Max: 1 00:25:41.167 Min: 1 00:25:41.167 Completion Queue Entry Size 00:25:41.167 Max: 1 00:25:41.167 Min: 1 00:25:41.167 Number of Namespaces: 0 00:25:41.167 Compare Command: Not Supported 00:25:41.167 Write Uncorrectable Command: Not Supported 00:25:41.167 Dataset Management Command: Not Supported 00:25:41.167 Write Zeroes Command: Not Supported 00:25:41.167 Set Features Save Field: Not Supported 00:25:41.167 Reservations: Not Supported 00:25:41.167 Timestamp: Not Supported 00:25:41.167 Copy: Not Supported 00:25:41.167 Volatile Write Cache: Not Present 00:25:41.167 Atomic Write Unit (Normal): 1 00:25:41.168 Atomic Write Unit (PFail): 1 00:25:41.168 Atomic Compare & Write Unit: 1 00:25:41.168 Fused Compare & Write: Not Supported 00:25:41.168 Scatter-Gather List 00:25:41.168 SGL Command Set: Supported 00:25:41.168 SGL Keyed: Not Supported 00:25:41.168 SGL Bit Bucket Descriptor: Not Supported 00:25:41.168 SGL Metadata Pointer: Not Supported 00:25:41.168 Oversized SGL: Not Supported 00:25:41.168 SGL Metadata Address: Not Supported 00:25:41.168 SGL Offset: Supported 00:25:41.168 Transport SGL Data Block: Not Supported 00:25:41.168 Replay Protected Memory Block: Not Supported 00:25:41.168 00:25:41.168 Firmware Slot Information 00:25:41.168 ========================= 00:25:41.168 Active slot: 0 00:25:41.168 00:25:41.168 00:25:41.168 Error Log 00:25:41.168 ========= 00:25:41.168 00:25:41.168 Active Namespaces 00:25:41.168 ================= 00:25:41.168 Discovery Log Page 00:25:41.168 ================== 00:25:41.168 Generation Counter: 2 00:25:41.168 Number of Records: 2 00:25:41.168 Record Format: 0 00:25:41.168 00:25:41.168 Discovery Log Entry 0 00:25:41.168 ---------------------- 00:25:41.168 Transport Type: 3 (TCP) 00:25:41.168 Address Family: 1 (IPv4) 00:25:41.168 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:25:41.168 Entry Flags: 00:25:41.168 Duplicate Returned Information: 0 00:25:41.168 Explicit Persistent Connection Support for Discovery: 0 00:25:41.168 Transport Requirements: 00:25:41.168 Secure Channel: Not Specified 00:25:41.168 Port ID: 1 (0x0001) 00:25:41.168 Controller ID: 65535 (0xffff) 00:25:41.168 Admin Max SQ Size: 32 00:25:41.168 Transport Service Identifier: 4420 00:25:41.168 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:41.168 Transport Address: 10.0.0.1 00:25:41.168 Discovery Log Entry 1 00:25:41.168 ---------------------- 00:25:41.168 Transport Type: 3 (TCP) 00:25:41.168 Address Family: 1 (IPv4) 00:25:41.168 Subsystem Type: 2 (NVM Subsystem) 00:25:41.168 Entry Flags: 00:25:41.168 Duplicate Returned Information: 0 00:25:41.168 Explicit Persistent Connection Support for Discovery: 0 00:25:41.168 Transport Requirements: 00:25:41.168 Secure Channel: Not Specified 00:25:41.168 Port ID: 1 (0x0001) 00:25:41.168 Controller ID: 65535 (0xffff) 00:25:41.168 Admin Max SQ Size: 32 00:25:41.168 Transport Service Identifier: 4420 00:25:41.168 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:41.168 Transport Address: 10.0.0.1 00:25:41.168 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:41.427 get_feature(0x01) failed 00:25:41.427 get_feature(0x02) failed 00:25:41.427 get_feature(0x04) failed 00:25:41.427 ===================================================== 00:25:41.427 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:41.427 ===================================================== 00:25:41.427 Controller Capabilities/Features 00:25:41.427 ================================ 00:25:41.427 Vendor ID: 0000 00:25:41.428 Subsystem Vendor ID: 
0000 00:25:41.428 Serial Number: c46fc927d65bbe27ee03 00:25:41.428 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:41.428 Firmware Version: 6.8.9-20 00:25:41.428 Recommended Arb Burst: 6 00:25:41.428 IEEE OUI Identifier: 00 00 00 00:25:41.428 Multi-path I/O 00:25:41.428 May have multiple subsystem ports: Yes 00:25:41.428 May have multiple controllers: Yes 00:25:41.428 Associated with SR-IOV VF: No 00:25:41.428 Max Data Transfer Size: Unlimited 00:25:41.428 Max Number of Namespaces: 1024 00:25:41.428 Max Number of I/O Queues: 128 00:25:41.428 NVMe Specification Version (VS): 1.3 00:25:41.428 NVMe Specification Version (Identify): 1.3 00:25:41.428 Maximum Queue Entries: 1024 00:25:41.428 Contiguous Queues Required: No 00:25:41.428 Arbitration Mechanisms Supported 00:25:41.428 Weighted Round Robin: Not Supported 00:25:41.428 Vendor Specific: Not Supported 00:25:41.428 Reset Timeout: 7500 ms 00:25:41.428 Doorbell Stride: 4 bytes 00:25:41.428 NVM Subsystem Reset: Not Supported 00:25:41.428 Command Sets Supported 00:25:41.428 NVM Command Set: Supported 00:25:41.428 Boot Partition: Not Supported 00:25:41.428 Memory Page Size Minimum: 4096 bytes 00:25:41.428 Memory Page Size Maximum: 4096 bytes 00:25:41.428 Persistent Memory Region: Not Supported 00:25:41.428 Optional Asynchronous Events Supported 00:25:41.428 Namespace Attribute Notices: Supported 00:25:41.428 Firmware Activation Notices: Not Supported 00:25:41.428 ANA Change Notices: Supported 00:25:41.428 PLE Aggregate Log Change Notices: Not Supported 00:25:41.428 LBA Status Info Alert Notices: Not Supported 00:25:41.428 EGE Aggregate Log Change Notices: Not Supported 00:25:41.428 Normal NVM Subsystem Shutdown event: Not Supported 00:25:41.428 Zone Descriptor Change Notices: Not Supported 00:25:41.428 Discovery Log Change Notices: Not Supported 00:25:41.428 Controller Attributes 00:25:41.428 128-bit Host Identifier: Supported 00:25:41.428 Non-Operational Permissive Mode: Not Supported 00:25:41.428 NVM Sets: Not 
Supported 00:25:41.428 Read Recovery Levels: Not Supported 00:25:41.428 Endurance Groups: Not Supported 00:25:41.428 Predictable Latency Mode: Not Supported 00:25:41.428 Traffic Based Keep ALive: Supported 00:25:41.428 Namespace Granularity: Not Supported 00:25:41.428 SQ Associations: Not Supported 00:25:41.428 UUID List: Not Supported 00:25:41.428 Multi-Domain Subsystem: Not Supported 00:25:41.428 Fixed Capacity Management: Not Supported 00:25:41.428 Variable Capacity Management: Not Supported 00:25:41.428 Delete Endurance Group: Not Supported 00:25:41.428 Delete NVM Set: Not Supported 00:25:41.428 Extended LBA Formats Supported: Not Supported 00:25:41.428 Flexible Data Placement Supported: Not Supported 00:25:41.428 00:25:41.428 Controller Memory Buffer Support 00:25:41.428 ================================ 00:25:41.428 Supported: No 00:25:41.428 00:25:41.428 Persistent Memory Region Support 00:25:41.428 ================================ 00:25:41.428 Supported: No 00:25:41.428 00:25:41.428 Admin Command Set Attributes 00:25:41.428 ============================ 00:25:41.428 Security Send/Receive: Not Supported 00:25:41.428 Format NVM: Not Supported 00:25:41.428 Firmware Activate/Download: Not Supported 00:25:41.428 Namespace Management: Not Supported 00:25:41.428 Device Self-Test: Not Supported 00:25:41.428 Directives: Not Supported 00:25:41.428 NVMe-MI: Not Supported 00:25:41.428 Virtualization Management: Not Supported 00:25:41.428 Doorbell Buffer Config: Not Supported 00:25:41.428 Get LBA Status Capability: Not Supported 00:25:41.428 Command & Feature Lockdown Capability: Not Supported 00:25:41.428 Abort Command Limit: 4 00:25:41.428 Async Event Request Limit: 4 00:25:41.428 Number of Firmware Slots: N/A 00:25:41.428 Firmware Slot 1 Read-Only: N/A 00:25:41.428 Firmware Activation Without Reset: N/A 00:25:41.428 Multiple Update Detection Support: N/A 00:25:41.428 Firmware Update Granularity: No Information Provided 00:25:41.428 Per-Namespace SMART Log: Yes 
00:25:41.428 Asymmetric Namespace Access Log Page: Supported
00:25:41.428 ANA Transition Time : 10 sec
00:25:41.428
00:25:41.428 Asymmetric Namespace Access Capabilities
00:25:41.428 ANA Optimized State : Supported
00:25:41.428 ANA Non-Optimized State : Supported
00:25:41.428 ANA Inaccessible State : Supported
00:25:41.428 ANA Persistent Loss State : Supported
00:25:41.428 ANA Change State : Supported
00:25:41.428 ANAGRPID is not changed : No
00:25:41.428 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported
00:25:41.428
00:25:41.428 ANA Group Identifier Maximum : 128
00:25:41.428 Number of ANA Group Identifiers : 128
00:25:41.428 Max Number of Allowed Namespaces : 1024
00:25:41.428 Subsystem NQN: nqn.2016-06.io.spdk:testnqn
00:25:41.428 Command Effects Log Page: Supported
00:25:41.428 Get Log Page Extended Data: Supported
00:25:41.428 Telemetry Log Pages: Not Supported
00:25:41.428 Persistent Event Log Pages: Not Supported
00:25:41.428 Supported Log Pages Log Page: May Support
00:25:41.428 Commands Supported & Effects Log Page: Not Supported
00:25:41.428 Feature Identifiers & Effects Log Page:May Support
00:25:41.428 NVMe-MI Commands & Effects Log Page: May Support
00:25:41.428 Data Area 4 for Telemetry Log: Not Supported
00:25:41.428 Error Log Page Entries Supported: 128
00:25:41.428 Keep Alive: Supported
00:25:41.428 Keep Alive Granularity: 1000 ms
00:25:41.428
00:25:41.428 NVM Command Set Attributes
00:25:41.428 ==========================
00:25:41.428 Submission Queue Entry Size
00:25:41.428 Max: 64
00:25:41.428 Min: 64
00:25:41.428 Completion Queue Entry Size
00:25:41.428 Max: 16
00:25:41.428 Min: 16
00:25:41.428 Number of Namespaces: 1024
00:25:41.428 Compare Command: Not Supported
00:25:41.428 Write Uncorrectable Command: Not Supported
00:25:41.428 Dataset Management Command: Supported
00:25:41.428 Write Zeroes Command: Supported
00:25:41.428 Set Features Save Field: Not Supported
00:25:41.428 Reservations: Not Supported
00:25:41.428 Timestamp: Not Supported
00:25:41.428 Copy: Not Supported
00:25:41.428 Volatile Write Cache: Present
00:25:41.428 Atomic Write Unit (Normal): 1
00:25:41.428 Atomic Write Unit (PFail): 1
00:25:41.429 Atomic Compare & Write Unit: 1
00:25:41.429 Fused Compare & Write: Not Supported
00:25:41.429 Scatter-Gather List
00:25:41.429 SGL Command Set: Supported
00:25:41.429 SGL Keyed: Not Supported
00:25:41.429 SGL Bit Bucket Descriptor: Not Supported
00:25:41.429 SGL Metadata Pointer: Not Supported
00:25:41.429 Oversized SGL: Not Supported
00:25:41.429 SGL Metadata Address: Not Supported
00:25:41.429 SGL Offset: Supported
00:25:41.429 Transport SGL Data Block: Not Supported
00:25:41.429 Replay Protected Memory Block: Not Supported
00:25:41.429
00:25:41.429 Firmware Slot Information
00:25:41.429 =========================
00:25:41.429 Active slot: 0
00:25:41.429
00:25:41.429 Asymmetric Namespace Access
00:25:41.429 ===========================
00:25:41.429 Change Count : 0
00:25:41.429 Number of ANA Group Descriptors : 1
00:25:41.429 ANA Group Descriptor : 0
00:25:41.429 ANA Group ID : 1
00:25:41.429 Number of NSID Values : 1
00:25:41.429 Change Count : 0
00:25:41.429 ANA State : 1
00:25:41.429 Namespace Identifier : 1
00:25:41.429
00:25:41.429 Commands Supported and Effects
00:25:41.429 ==============================
00:25:41.429 Admin Commands
00:25:41.429 --------------
00:25:41.429 Get Log Page (02h): Supported
00:25:41.429 Identify (06h): Supported
00:25:41.429 Abort (08h): Supported
00:25:41.429 Set Features (09h): Supported
00:25:41.429 Get Features (0Ah): Supported
00:25:41.429 Asynchronous Event Request (0Ch): Supported
00:25:41.429 Keep Alive (18h): Supported
00:25:41.429 I/O Commands
00:25:41.429 ------------
00:25:41.429 Flush (00h): Supported
00:25:41.429 Write (01h): Supported LBA-Change
00:25:41.429 Read (02h): Supported
00:25:41.429 Write Zeroes (08h): Supported LBA-Change
00:25:41.429 Dataset Management (09h): Supported
00:25:41.429
00:25:41.429 Error Log
00:25:41.429 =========
00:25:41.429 Entry: 0
00:25:41.429 Error Count: 0x3
00:25:41.429 Submission Queue Id: 0x0
00:25:41.429 Command Id: 0x5
00:25:41.429 Phase Bit: 0
00:25:41.429 Status Code: 0x2
00:25:41.429 Status Code Type: 0x0
00:25:41.429 Do Not Retry: 1
00:25:41.429 Error Location: 0x28
00:25:41.429 LBA: 0x0
00:25:41.429 Namespace: 0x0
00:25:41.429 Vendor Log Page: 0x0
00:25:41.429 -----------
00:25:41.429 Entry: 1
00:25:41.429 Error Count: 0x2
00:25:41.429 Submission Queue Id: 0x0
00:25:41.429 Command Id: 0x5
00:25:41.429 Phase Bit: 0
00:25:41.429 Status Code: 0x2
00:25:41.429 Status Code Type: 0x0
00:25:41.429 Do Not Retry: 1
00:25:41.429 Error Location: 0x28
00:25:41.429 LBA: 0x0
00:25:41.429 Namespace: 0x0
00:25:41.429 Vendor Log Page: 0x0
00:25:41.429 -----------
00:25:41.429 Entry: 2
00:25:41.429 Error Count: 0x1
00:25:41.429 Submission Queue Id: 0x0
00:25:41.429 Command Id: 0x4
00:25:41.429 Phase Bit: 0
00:25:41.429 Status Code: 0x2
00:25:41.429 Status Code Type: 0x0
00:25:41.429 Do Not Retry: 1
00:25:41.429 Error Location: 0x28
00:25:41.429 LBA: 0x0
00:25:41.429 Namespace: 0x0
00:25:41.429 Vendor Log Page: 0x0
00:25:41.429
00:25:41.429 Number of Queues
00:25:41.429 ================
00:25:41.429 Number of I/O Submission Queues: 128
00:25:41.429 Number of I/O Completion Queues: 128
00:25:41.429
00:25:41.429 ZNS Specific Controller Data
00:25:41.429 ============================
00:25:41.429 Zone Append Size Limit: 0
00:25:41.429
00:25:41.429
00:25:41.429 Active Namespaces
00:25:41.429 =================
00:25:41.429 get_feature(0x05) failed
00:25:41.429 Namespace ID:1
00:25:41.429 Command Set Identifier: NVM (00h)
00:25:41.429 Deallocate: Supported
00:25:41.429 Deallocated/Unwritten Error: Not Supported
00:25:41.429 Deallocated Read Value: Unknown
00:25:41.429 Deallocate in Write Zeroes: Not Supported
00:25:41.429 Deallocated Guard Field: 0xFFFF
00:25:41.429 Flush: Supported
00:25:41.429 Reservation: Not Supported
00:25:41.429 Namespace Sharing Capabilities: Multiple Controllers
00:25:41.429 Size (in LBAs): 1953525168 (931GiB)
00:25:41.429 Capacity (in LBAs): 1953525168 (931GiB)
00:25:41.429 Utilization (in LBAs): 1953525168 (931GiB)
00:25:41.429 UUID: 6c07f35e-ae40-4eb8-95f8-50901e47a64d
00:25:41.429 Thin Provisioning: Not Supported
00:25:41.429 Per-NS Atomic Units: Yes
00:25:41.429 Atomic Boundary Size (Normal): 0
00:25:41.429 Atomic Boundary Size (PFail): 0
00:25:41.429 Atomic Boundary Offset: 0
00:25:41.429 NGUID/EUI64 Never Reused: No
00:25:41.429 ANA group ID: 1
00:25:41.429 Namespace Write Protected: No
00:25:41.429 Number of LBA Formats: 1
00:25:41.429 Current LBA Format: LBA Format #00
00:25:41.429 LBA Format #00: Data Size: 512 Metadata Size: 0
00:25:41.429
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:41.429 12:35:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:43.967 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:43.967 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:25:43.967 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:25:43.967 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0
00:25:43.967 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:25:43.967 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:25:43.967 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:25:43.967 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:25:43.967 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:25:43.967 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:25:43.967 12:35:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:25:46.504 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:25:46.504 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:25:46.504 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:25:46.504 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:25:46.504 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:25:46.504 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:25:46.504 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:25:46.504 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:25:46.504 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:25:46.504 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:25:46.504 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:25:46.504 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:25:46.504 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:25:46.504 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:25:46.504 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:25:46.504 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:25:47.443 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:25:47.443
00:25:47.443 real 0m16.732s
00:25:47.443 user 0m4.282s
00:25:47.443 sys 0m8.816s
00:25:47.443 12:35:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:47.443 12:35:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:25:47.443 ************************************
00:25:47.443 END TEST nvmf_identify_kernel_target
00:25:47.443 ************************************
00:25:47.443 12:35:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:25:47.443 12:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:47.443 12:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:47.443 12:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:47.443 ************************************
00:25:47.443 START TEST nvmf_auth_host
00:25:47.443 ************************************
00:25:47.443 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:25:47.703 * Looking for test storage...
00:25:47.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:47.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.704 --rc genhtml_branch_coverage=1 00:25:47.704 --rc genhtml_function_coverage=1 00:25:47.704 --rc genhtml_legend=1 00:25:47.704 --rc geninfo_all_blocks=1 00:25:47.704 --rc geninfo_unexecuted_blocks=1 00:25:47.704 00:25:47.704 ' 00:25:47.704 12:35:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:47.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.704 --rc genhtml_branch_coverage=1 00:25:47.704 --rc genhtml_function_coverage=1 00:25:47.704 --rc genhtml_legend=1 00:25:47.704 --rc geninfo_all_blocks=1 00:25:47.704 --rc geninfo_unexecuted_blocks=1 00:25:47.704 00:25:47.704 ' 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:47.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.704 --rc genhtml_branch_coverage=1 00:25:47.704 --rc genhtml_function_coverage=1 00:25:47.704 --rc genhtml_legend=1 00:25:47.704 --rc geninfo_all_blocks=1 00:25:47.704 --rc geninfo_unexecuted_blocks=1 00:25:47.704 00:25:47.704 ' 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:47.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.704 --rc genhtml_branch_coverage=1 00:25:47.704 --rc genhtml_function_coverage=1 00:25:47.704 --rc genhtml_legend=1 00:25:47.704 --rc geninfo_all_blocks=1 00:25:47.704 --rc geninfo_unexecuted_blocks=1 00:25:47.704 00:25:47.704 ' 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.704 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.704 12:35:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:47.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:47.705 12:35:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:47.705 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:54.382 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:54.382 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:54.382 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:54.383 Found net devices under 0000:86:00.0: cvl_0_0 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:54.383 Found net devices under 0000:86:00.1: cvl_0_1 00:25:54.383 12:35:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:54.383 12:35:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:54.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:25:54.383 00:25:54.383 --- 10.0.0.2 ping statistics --- 00:25:54.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.383 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:54.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:54.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:25:54.383 00:25:54.383 --- 10.0.0.1 ping statistics --- 00:25:54.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.383 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=573112 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:54.383 12:35:36 
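The `nvmf_tcp_init` trace above moves one port of the NIC pair into a network namespace, assigns the two 10.0.0.0/24 addresses, and verifies reachability in both directions before launching `nvmf_tgt`. A dry-run sketch of that sequence (interface and namespace names taken from the trace; `run` just prints each command so this is illustrative and does not require root):

```shell
#!/usr/bin/env bash
# Dry-run of the netns plumbing nvmf_tcp_init performs in the trace above.
# Swap `run` for `eval "$@"` (as root, with the real NICs) to execute it.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"              # target side lives in the namespace
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"       # initiator IP stays on the host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                # host -> namespace
run ip netns exec "$NS" ping -c 1 10.0.0.1            # namespace -> host
```

Because the target runs inside the namespace, every target-side command in the rest of the log is wrapped in `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix).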
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 573112 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 573112 ']' 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=53b29ff3798e8fc81ddb4eb7b4e17177 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Qpc 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 53b29ff3798e8fc81ddb4eb7b4e17177 0 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 53b29ff3798e8fc81ddb4eb7b4e17177 0 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=53b29ff3798e8fc81ddb4eb7b4e17177 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:54.383 12:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:54.383 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Qpc 00:25:54.383 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Qpc 00:25:54.383 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Qpc 00:25:54.383 12:35:37 
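Each `gen_dhchap_key <digest> <len>` cycle in the trace reads `len/2` random bytes, hex-encodes them with `xxd`, and pipes the result through an inline `python -` to produce a `DHHC-1:...` secret. Reconstructing from the traced key above (the payload layout — the hex string itself as key material, followed by its little-endian CRC32, base64-encoded — is an assumption inferred from the trace, not confirmed source):

```shell
#!/usr/bin/env bash
# Hedged sketch of format_dhchap_key for the first key in the trace.
# Assumption: the ASCII hex string is treated as the key material and the
# secret is DHHC-1:<digest-hex>:base64(key || crc32_le(key)):
key=53b29ff3798e8fc81ddb4eb7b4e17177   # key observed in the trace above
python3 - "$key" 0 <<'EOF'
import base64, sys, zlib

key = sys.argv[1].encode()        # hex string used verbatim as key bytes
digest = int(sys.argv[2])         # 0=null, 1=sha256, 2=sha384, 3=sha512
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
```

The result is then written to a `mktemp -t spdk.key-<digest>.XXX` file and `chmod 0600`, which is the `/tmp/spdk.key-null.Qpc` path that `keys[0]` records.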
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:54.383 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.383 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.383 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.383 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:54.383 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:54.383 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:54.383 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=81ac129830e5407d1dc0959427a080dc3b81eeda1cec01c53539ae38ddcdafbe 00:25:54.383 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.2uP 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 81ac129830e5407d1dc0959427a080dc3b81eeda1cec01c53539ae38ddcdafbe 3 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 81ac129830e5407d1dc0959427a080dc3b81eeda1cec01c53539ae38ddcdafbe 3 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=81ac129830e5407d1dc0959427a080dc3b81eeda1cec01c53539ae38ddcdafbe 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.2uP 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.2uP 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.2uP 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5b415ddb76e528b0a0f5e795642a67d9ee405c507cc2e229 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Y93 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5b415ddb76e528b0a0f5e795642a67d9ee405c507cc2e229 0 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5b415ddb76e528b0a0f5e795642a67d9ee405c507cc2e229 0 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.384 12:35:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5b415ddb76e528b0a0f5e795642a67d9ee405c507cc2e229 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Y93 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Y93 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Y93 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=82c19cae75031d27a6b15e3112e5f1e680b96bd334bacae2 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.o3B 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 82c19cae75031d27a6b15e3112e5f1e680b96bd334bacae2 2 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 82c19cae75031d27a6b15e3112e5f1e680b96bd334bacae2 2 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=82c19cae75031d27a6b15e3112e5f1e680b96bd334bacae2 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.o3B 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.o3B 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.o3B 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ba3bcddc45089b317df174ebda03dda9 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.WUi 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ba3bcddc45089b317df174ebda03dda9 1 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ba3bcddc45089b317df174ebda03dda9 1 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ba3bcddc45089b317df174ebda03dda9 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.WUi 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.WUi 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.WUi 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=0a9781a45283782b38ce6876044e059e 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.VSV 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0a9781a45283782b38ce6876044e059e 1 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0a9781a45283782b38ce6876044e059e 1 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0a9781a45283782b38ce6876044e059e 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.VSV 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.VSV 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.VSV 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:54.384 12:35:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5a81af97a1bee9806f83cf1606b40b79ef503f28b44dad38 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.SXT 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5a81af97a1bee9806f83cf1606b40b79ef503f28b44dad38 2 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5a81af97a1bee9806f83cf1606b40b79ef503f28b44dad38 2 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5a81af97a1bee9806f83cf1606b40b79ef503f28b44dad38 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.SXT 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.SXT 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.SXT 00:25:54.384 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4c37b5ecb571019f43c70e273bb62834 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.PTZ 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4c37b5ecb571019f43c70e273bb62834 0 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4c37b5ecb571019f43c70e273bb62834 0 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4c37b5ecb571019f43c70e273bb62834 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.PTZ 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.PTZ 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.PTZ 00:25:54.385 12:35:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1c55cd2b7c28b14259e728c8f3808393b13ff0d22af70a4d2555b91553fd42ff 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.mqF 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1c55cd2b7c28b14259e728c8f3808393b13ff0d22af70a4d2555b91553fd42ff 3 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1c55cd2b7c28b14259e728c8f3808393b13ff0d22af70a4d2555b91553fd42ff 3 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1c55cd2b7c28b14259e728c8f3808393b13ff0d22af70a4d2555b91553fd42ff 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:54.385 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:25:54.644 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.mqF 00:25:54.644 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.mqF 00:25:54.644 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.mqF 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 573112 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 573112 ']' 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Qpc 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.2uP ]] 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2uP 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Y93 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.o3B ]] 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.o3B 00:25:54.645 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.WUi 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.VSV ]] 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VSV 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.SXT 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.PTZ ]] 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.PTZ 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.mqF 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.904 12:35:37 
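The block above is `host/auth.sh` looping over the generated secrets: each file becomes a file-backed keyring entry `key$i`, plus an optional controller key `ckey$i` when one was generated (note `ckeys[4]` is empty, so key4 gets no counterpart). A runnable sketch with `rpc_cmd` stubbed by `echo` (the real test issues these through `rpc.py` against `/var/tmp/spdk.sock` on the live target):

```shell
#!/usr/bin/env bash
# Sketch of the keyring registration loop; rpc_cmd is stubbed so it runs
# without a live nvmf_tgt. Paths are the first two key pairs from the trace.
rpc_cmd() { echo "rpc: $*"; }

keys=(/tmp/spdk.key-null.Qpc /tmp/spdk.key-null.Y93)
ckeys=(/tmp/spdk.key-sha512.2uP /tmp/spdk.key-sha384.o3B)

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"
    if [[ -n ${ckeys[$i]:-} ]]; then
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done
```

Registering the files as named keyring entries is what lets later DH-HMAC-CHAP steps refer to secrets by key name rather than by path.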
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.904 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.905 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.905 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:54.905 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:54.905 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:54.905 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:54.905 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:54.905 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:54.905 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:54.905 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:54.905 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:54.905 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:54.905 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:57.440 Waiting for block devices as requested 00:25:57.440 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:57.698 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:57.698 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:57.698 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:57.957 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:57.957 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:57.957 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:57.957 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:58.216 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:58.216 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:58.216 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:58.216 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:58.473 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:58.473 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:58.473 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:58.731 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:58.731 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:59.300 No valid GPT data, bailing 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:59.300 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:59.560 00:25:59.560 Discovery Log Number of Records 2, Generation counter 2 00:25:59.560 =====Discovery Log Entry 0====== 00:25:59.560 trtype: tcp 00:25:59.560 adrfam: ipv4 00:25:59.560 subtype: current discovery subsystem 00:25:59.560 treq: not specified, sq flow control disable supported 00:25:59.560 portid: 1 00:25:59.560 trsvcid: 4420 00:25:59.560 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:59.560 traddr: 10.0.0.1 00:25:59.560 eflags: none 00:25:59.560 sectype: none 00:25:59.560 =====Discovery Log Entry 1====== 00:25:59.560 trtype: tcp 00:25:59.560 adrfam: ipv4 00:25:59.560 subtype: nvme subsystem 00:25:59.560 treq: not specified, sq flow control disable supported 00:25:59.560 portid: 1 00:25:59.560 trsvcid: 4420 00:25:59.560 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:59.560 traddr: 10.0.0.1 00:25:59.560 eflags: none 00:25:59.560 sectype: none 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: ]] 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.560 nvme0n1 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.560 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.561 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.561 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.561 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.561 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: ]] 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.820 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:59.821 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.821 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.821 nvme0n1 00:25:59.821 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.821 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.821 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.821 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.821 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.821 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.821 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.821 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.821 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.821 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.080 12:35:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: ]] 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.080 
12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.080 12:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.080 nvme0n1 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: ]] 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.080 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.081 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.081 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.081 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.081 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.081 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.081 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.081 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:00.081 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.081 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:26:00.340 nvme0n1 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: ]] 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.340 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.600 nvme0n1 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.600 12:35:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.600 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.860 nvme0n1 00:26:00.860 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.860 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.860 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.860 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.860 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.860 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.860 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.860 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.860 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.860 
12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: ]] 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:00.861 
12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.861 12:35:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.861 12:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.121 nvme0n1 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.121 12:35:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: ]] 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.121 12:35:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.121 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.380 nvme0n1 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.380 12:35:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: ]] 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.380 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.381 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.381 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.381 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.381 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.381 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.381 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:01.381 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.381 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.640 nvme0n1 00:26:01.640 12:35:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:01.640 12:35:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: ]] 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.640 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.899 nvme0n1 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.900 12:35:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.900 12:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.159 nvme0n1 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: ]] 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.159 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.160 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:26:02.160 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.160 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.160 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:02.160 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.160 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.419 nvme0n1 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: ]] 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:02.419 
12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.419 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.678 nvme0n1 00:26:02.678 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.678 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.678 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.678 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.678 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.678 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.678 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.678 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.678 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.678 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.937 12:35:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: ]] 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.937 12:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.197 nvme0n1 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.197 12:35:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:03.197 
12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: ]] 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.197 12:35:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.197 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.457 nvme0n1 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.457 12:35:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.457 
12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.457 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.716 nvme0n1 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: ]] 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.716 12:35:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.716 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:03.717 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.717 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.717 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.717 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.717 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.717 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.717 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.717 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.717 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.717 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.717 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.717 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.717 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.717 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.717 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:03.717 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.717 12:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.284 nvme0n1 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: ]] 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:04.284 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.285 12:35:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.285 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.544 nvme0n1 00:26:04.544 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.544 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.544 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.544 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.544 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.803 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.803 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.803 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.803 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.803 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.803 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.803 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.803 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:04.803 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.803 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.803 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:04.803 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:26:04.803 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:04.803 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:04.803 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: ]] 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.804 12:35:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.804 12:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.063 nvme0n1 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:05.063 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.064 12:35:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: ]] 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.064 12:35:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.064 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.632 nvme0n1 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.632 12:35:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.632 12:35:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.632 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:05.633 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.633 12:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.210 nvme0n1 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: ]] 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.210 12:35:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.210 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.780 nvme0n1 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.780 12:35:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: ]] 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.780 12:35:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.780 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.781 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.781 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.781 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.781 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.781 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.781 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.781 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.781 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.781 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:06.781 12:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.781 12:35:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.349 nvme0n1 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: ]] 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.349 12:35:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.349 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.918 nvme0n1 00:26:07.918 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.918 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.918 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.918 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.918 12:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.918 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.918 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.918 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.918 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.177 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.177 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: ]] 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.178 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.746 nvme0n1 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.746 
12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.746 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.747 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.747 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.747 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.747 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.747 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.747 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.747 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.747 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:08.747 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.747 12:35:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.315 nvme0n1 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: ]] 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.315 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.574 nvme0n1 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.575 
12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: ]] 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.575 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.834 nvme0n1 
00:26:09.834 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.834 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.834 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:09.835 12:35:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: ]] 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.835 
12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.835 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.094 nvme0n1 00:26:10.094 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.094 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.094 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.094 12:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.094 12:35:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: ]] 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.094 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.354 nvme0n1 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.354 12:35:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.354 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.354 nvme0n1 00:26:10.355 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.355 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.355 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.355 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.355 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.355 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: ]] 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.614 nvme0n1 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.614 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:10.874 
12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: ]] 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.874 nvme0n1 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.874 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.132 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.132 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.132 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:11.132 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.132 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.132 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.132 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:11.132 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:11.133 12:35:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 
00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: ]] 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.133 12:35:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.133 nvme0n1 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.133 12:35:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.133 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: ]] 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.391 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.392 nvme0n1 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:11.392 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.650 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.651 nvme0n1 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.651 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.910 12:35:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: ]] 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.910 12:35:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:11.910 12:35:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.910 12:35:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.170 nvme0n1 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: ]] 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.170 
12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.170 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.430 nvme0n1 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: ]] 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 
00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.430 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.688 nvme0n1 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.688 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: ]] 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.689 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.946 nvme0n1 00:26:12.946 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.946 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.946 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.946 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.946 12:35:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.946 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.946 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.946 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.946 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.946 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.946 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.946 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.946 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:12.946 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.946 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.946 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.946 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:12.946 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:12.946 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:12.946 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.946 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:12.946 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:12.946 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.205 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.463 nvme0n1 00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL:
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=:
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL:
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: ]]
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=:
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:13.463 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:13.464 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:13.464 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.464 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.722 nvme0n1
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==:
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==:
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==:
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: ]]
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==:
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.722 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:13.981 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:13.981 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:13.981 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:13.981 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:13.981 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:13.981 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:13.981 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:13.981 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:13.981 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:13.981 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:13.981 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:13.981 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:13.981 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:13.981 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:13.981 12:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.240 nvme0n1
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk:
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki:
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk:
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: ]]
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki:
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.240 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.806 nvme0n1
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==:
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP:
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==:
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: ]]
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP:
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:14.806 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:14.807 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:14.807 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:14.807 12:35:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.065 nvme0n1
00:26:15.065 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.065 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:15.065 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:15.065 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.065 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.065 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.065 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:15.065 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:15.065 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.065 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=:
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=:
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.325 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.585 nvme0n1
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL:
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=:
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL:
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: ]]
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=:
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.585 12:35:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:16.154 nvme0n1
00:26:16.154 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:16.154 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:16.154 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:16.154 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:16.154 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:16.154 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==:
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==:
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==:
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: ]]
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==:
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.413 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.981 nvme0n1 00:26:16.981 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.981 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.981 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.981 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.981 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.981 
12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.981 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.981 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.981 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.981 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.981 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: ]] 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.982 12:35:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.982 12:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.551 nvme0n1 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.552 12:36:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: ]] 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:17.552 12:36:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.552 12:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.490 nvme0n1 00:26:18.490 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.490 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.490 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.490 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:18.491 12:36:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.491 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.059 nvme0n1 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.059 
12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: ]] 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.059 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.060 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.060 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.060 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.060 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.060 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.060 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.060 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.060 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.060 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.060 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:19.060 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.060 12:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.060 nvme0n1 00:26:19.060 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.060 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.060 12:36:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.060 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.060 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.060 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.060 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.060 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.060 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.060 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.319 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.319 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.319 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:19.319 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.319 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.319 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:19.319 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:19.319 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:19.319 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:19.319 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:26:19.319 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:19.319 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:19.319 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: ]] 00:26:19.319 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:19.319 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.320 nvme0n1 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: ]] 00:26:19.320 12:36:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.320 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.579 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.579 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.579 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.579 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.579 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.579 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.579 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.579 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:26:19.579 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.579 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.579 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.580 nvme0n1 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.580 12:36:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: ]] 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.580 12:36:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.580 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.840 nvme0n1 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.840 12:36:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.840 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.841 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.841 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.841 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.841 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.841 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.841 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:19.841 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.841 12:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.100 nvme0n1 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: ]] 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.100 12:36:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.100 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.101 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.101 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.101 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.101 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:20.101 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.101 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.359 nvme0n1 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:20.359 12:36:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: ]] 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:20.359 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.360 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:20.360 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.360 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.360 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.360 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.360 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.360 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.360 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:26:20.360 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.360 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.360 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.360 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.360 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.360 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.360 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.360 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:20.360 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.360 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.619 nvme0n1 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: ]] 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:20.619 
12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.619 12:36:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.619 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:20.879 nvme0n1
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==:
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP:
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==:
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: ]]
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP:
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:20.879 12:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.138 nvme0n1
00:26:21.138 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.138 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:21.138 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:21.138 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.138 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.138 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.138 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:21.138 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:21.138 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.138 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=:
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=:
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.139 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.398 nvme0n1
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL:
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=:
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL:
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: ]]
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=:
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.398 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.658 nvme0n1
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==:
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==:
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==:
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: ]]
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==:
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.658 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.917 nvme0n1
00:26:21.917 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.917 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:21.917 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:21.917 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.917 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:21.917 12:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.917 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:21.917 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:21.917 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.917 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk:
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki:
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk:
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: ]]
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki:
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.176 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.436 nvme0n1
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==:
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP:
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==:
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: ]]
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP:
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.436 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.695 nvme0n1
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=:
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=:
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.695 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.954 nvme0n1
00:26:22.954 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.954 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:22.954 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:22.954 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.954 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.954 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.954 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:22.954 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:22.954 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.954 12:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:22.954 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.954 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:22.954 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:22.954 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:26:22.954 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:22.954 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL:
00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=:
00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:22.955 12:36:06
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: ]] 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.955 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.523 nvme0n1 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: ]] 00:26:23.523 12:36:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.523 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.782 nvme0n1 00:26:23.782 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.782 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.782 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.782 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.782 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.782 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.040 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.040 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.040 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.040 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:24.040 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.040 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.040 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:24.040 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.040 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.040 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.040 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: ]] 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.041 
12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.041 12:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.300 nvme0n1 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.300 12:36:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: ]] 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.300 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.559 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.559 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.559 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.559 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.559 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.559 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.559 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.559 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.559 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.559 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.559 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.559 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:24.559 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.559 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:24.817 nvme0n1 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.817 
12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.817 12:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.384 nvme0n1 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTNiMjlmZjM3OThlOGZjODFkZGI0ZWI3YjRlMTcxNzcuV8jL: 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: ]] 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODFhYzEyOTgzMGU1NDA3ZDFkYzA5NTk0MjdhMDgwZGMzYjgxZWVkYTFjZWMwMWM1MzUzOWFlMzhkZGNkYWZiZRxpLOw=: 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.384 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.385 12:36:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.385 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.385 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.385 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.385 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.385 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.385 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.385 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:25.385 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.385 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.958 nvme0n1 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.958 12:36:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: ]] 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.958 12:36:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.958 12:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.526 nvme0n1 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.526 12:36:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: ]] 00:26:26.526 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.785 12:36:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.785 12:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.352 nvme0n1 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.352 12:36:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWE4MWFmOTdhMWJlZTk4MDZmODNjZjE2MDZiNDBiNzllZjUwM2YyOGI0NGRhZDM4Jt4q6g==: 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: ]] 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGMzN2I1ZWNiNTcxMDE5ZjQzYzcwZTI3M2JiNjI4MzQW8XJP: 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:27.352 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.353 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:27.921 nvme0n1 00:26:27.921 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.921 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.921 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MWM1NWNkMmI3YzI4YjE0MjU5ZTcyOGM4ZjM4MDgzOTNiMTNmZjBkMjJhZjcwYTRkMjU1NWI5MTU1M2ZkNDJmZqZQEYE=: 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:27.922 
12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.922 12:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.489 nvme0n1 00:26:28.489 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.489 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.489 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.489 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.489 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.489 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.489 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.489 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.489 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.489 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.749 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.749 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:28.749 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.749 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:28.749 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:28.749 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:28.749 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:28.749 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: ]] 00:26:28.750 
12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.750 request: 00:26:28.750 { 00:26:28.750 "name": "nvme0", 00:26:28.750 "trtype": "tcp", 00:26:28.750 "traddr": "10.0.0.1", 00:26:28.750 "adrfam": "ipv4", 00:26:28.750 "trsvcid": "4420", 00:26:28.750 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:28.750 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:28.750 "prchk_reftag": false, 00:26:28.750 "prchk_guard": false, 00:26:28.750 "hdgst": false, 00:26:28.750 "ddgst": false, 00:26:28.750 "allow_unrecognized_csi": false, 00:26:28.750 "method": "bdev_nvme_attach_controller", 00:26:28.750 "req_id": 1 00:26:28.750 } 00:26:28.750 Got JSON-RPC error response 00:26:28.750 response: 00:26:28.750 { 00:26:28.750 "code": -5, 00:26:28.750 "message": "Input/output 
error" 00:26:28.750 } 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.750 request: 00:26:28.750 { 00:26:28.750 "name": "nvme0", 00:26:28.750 "trtype": "tcp", 00:26:28.750 "traddr": "10.0.0.1", 
00:26:28.750 "adrfam": "ipv4", 00:26:28.750 "trsvcid": "4420", 00:26:28.750 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:28.750 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:28.750 "prchk_reftag": false, 00:26:28.750 "prchk_guard": false, 00:26:28.750 "hdgst": false, 00:26:28.750 "ddgst": false, 00:26:28.750 "dhchap_key": "key2", 00:26:28.750 "allow_unrecognized_csi": false, 00:26:28.750 "method": "bdev_nvme_attach_controller", 00:26:28.750 "req_id": 1 00:26:28.750 } 00:26:28.750 Got JSON-RPC error response 00:26:28.750 response: 00:26:28.750 { 00:26:28.750 "code": -5, 00:26:28.750 "message": "Input/output error" 00:26:28.750 } 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.750 12:36:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:28.750 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:28.751 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:28.751 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:29.009 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:29.009 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:29.009 12:36:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:29.009 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:29.009 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.010 request: 00:26:29.010 { 00:26:29.010 "name": "nvme0", 00:26:29.010 "trtype": "tcp", 00:26:29.010 "traddr": "10.0.0.1", 00:26:29.010 "adrfam": "ipv4", 00:26:29.010 "trsvcid": "4420", 00:26:29.010 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:29.010 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:29.010 "prchk_reftag": false, 00:26:29.010 "prchk_guard": false, 00:26:29.010 "hdgst": false, 00:26:29.010 "ddgst": false, 00:26:29.010 "dhchap_key": "key1", 00:26:29.010 "dhchap_ctrlr_key": "ckey2", 00:26:29.010 "allow_unrecognized_csi": false, 00:26:29.010 "method": "bdev_nvme_attach_controller", 00:26:29.010 "req_id": 1 00:26:29.010 } 00:26:29.010 Got JSON-RPC error response 00:26:29.010 response: 00:26:29.010 { 00:26:29.010 "code": -5, 00:26:29.010 "message": "Input/output error" 00:26:29.010 } 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.010 12:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.010 nvme0n1 00:26:29.010 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.010 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:29.010 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.010 12:36:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:29.010 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:29.010 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:29.010 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:29.010 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:29.010 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:29.010 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:29.010 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:29.010 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: ]] 00:26:29.010 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:29.010 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:29.010 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.010 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.268 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.268 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.268 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:29.268 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.268 12:36:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.268 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.268 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.268 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:29.268 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:29.268 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:29.268 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:29.268 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:29.268 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:29.268 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:29.268 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:29.268 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.268 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.268 request: 00:26:29.268 { 00:26:29.268 "name": "nvme0", 00:26:29.268 "dhchap_key": "key1", 00:26:29.268 "dhchap_ctrlr_key": "ckey2", 00:26:29.269 "method": "bdev_nvme_set_keys", 00:26:29.269 "req_id": 1 00:26:29.269 } 00:26:29.269 Got JSON-RPC error response 00:26:29.269 response: 00:26:29.269 { 00:26:29.269 "code": -13, 00:26:29.269 "message": "Permission denied" 00:26:29.269 } 00:26:29.269 
12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:29.269 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:29.269 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:29.269 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:29.269 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:29.269 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.269 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:29.269 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.269 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.269 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.269 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:29.269 12:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:30.205 12:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.205 12:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:30.205 12:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.205 12:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.464 12:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.464 12:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:30.464 12:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI0MTVkZGI3NmU1MjhiMGEwZjVlNzk1NjQyYTY3ZDllZTQwNWM1MDdjYzJlMjI5erF3YA==: 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: ]] 00:26:31.401 12:36:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJjMTljYWU3NTAzMWQyN2E2YjE1ZTMxMTJlNWYxZTY4MGI5NmJkMzM0YmFjYWUyY9I7zQ==: 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.401 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.693 nvme0n1 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.693 12:36:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmEzYmNkZGM0NTA4OWIzMTdkZjE3NGViZGEwM2RkYTkurhSk: 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: ]] 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGE5NzgxYTQ1MjgzNzgyYjM4Y2U2ODc2MDQ0ZTA1OWUw5mki: 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:31.693 
12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.693 request: 00:26:31.693 { 00:26:31.693 "name": "nvme0", 00:26:31.693 "dhchap_key": "key2", 00:26:31.693 "dhchap_ctrlr_key": "ckey1", 00:26:31.693 "method": "bdev_nvme_set_keys", 00:26:31.693 "req_id": 1 00:26:31.693 } 00:26:31.693 Got JSON-RPC error response 00:26:31.693 response: 00:26:31.693 { 00:26:31.693 "code": -13, 00:26:31.693 "message": "Permission denied" 00:26:31.693 } 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.693 12:36:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:31.693 12:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:32.737 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:32.738 rmmod nvme_tcp 00:26:32.738 rmmod nvme_fabrics 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 573112 ']' 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 573112 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 573112 ']' 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 573112 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 573112 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 573112' 00:26:32.738 killing process with pid 573112 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 573112 00:26:32.738 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 573112 00:26:32.996 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:32.996 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:32.996 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:32.996 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:26:32.996 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:32.996 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:32.996 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:32.996 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:32.996 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:32.996 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.996 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.996 12:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.534 12:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:35.534 12:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:35.534 12:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:35.534 12:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:35.534 12:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:35.534 12:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:35.534 12:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:35.534 12:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:35.534 12:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:35.534 12:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:35.534 12:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:35.534 12:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:35.534 12:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:38.070 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:38.070 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:38.070 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:38.070 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:38.070 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:38.070 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:38.070 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:38.070 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:38.070 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:38.070 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:38.070 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:38.070 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:38.070 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:38.070 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:38.070 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:38.070 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:39.008 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:39.008 12:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Qpc /tmp/spdk.key-null.Y93 /tmp/spdk.key-sha256.WUi /tmp/spdk.key-sha384.SXT /tmp/spdk.key-sha512.mqF 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:39.008 12:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:42.300 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:42.300 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:42.300 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:42.300 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:42.300 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:42.300 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:42.300 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:42.300 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:42.300 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:42.300 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:42.300 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:42.300 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:42.300 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:42.300 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:42.300 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:42.300 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:42.300 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:42.300 00:26:42.300 real 0m54.387s 00:26:42.300 user 0m49.111s 00:26:42.300 sys 0m12.728s 00:26:42.300 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:42.300 12:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.300 ************************************ 00:26:42.300 END TEST nvmf_auth_host 00:26:42.300 ************************************ 00:26:42.300 12:36:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:26:42.300 12:36:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:42.300 12:36:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:42.300 12:36:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:42.300 12:36:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.300 ************************************ 00:26:42.300 START TEST nvmf_digest 00:26:42.300 ************************************ 00:26:42.300 12:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:42.300 * Looking for test storage... 00:26:42.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:42.300 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:42.300 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:26:42.300 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:42.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.301 --rc genhtml_branch_coverage=1 00:26:42.301 --rc genhtml_function_coverage=1 00:26:42.301 --rc genhtml_legend=1 00:26:42.301 --rc geninfo_all_blocks=1 00:26:42.301 --rc geninfo_unexecuted_blocks=1 00:26:42.301 00:26:42.301 ' 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:42.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.301 --rc genhtml_branch_coverage=1 00:26:42.301 --rc genhtml_function_coverage=1 00:26:42.301 --rc genhtml_legend=1 00:26:42.301 --rc geninfo_all_blocks=1 00:26:42.301 --rc geninfo_unexecuted_blocks=1 00:26:42.301 00:26:42.301 ' 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:42.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.301 --rc genhtml_branch_coverage=1 00:26:42.301 --rc genhtml_function_coverage=1 00:26:42.301 --rc genhtml_legend=1 00:26:42.301 --rc geninfo_all_blocks=1 00:26:42.301 --rc geninfo_unexecuted_blocks=1 00:26:42.301 00:26:42.301 ' 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:42.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.301 --rc genhtml_branch_coverage=1 00:26:42.301 --rc genhtml_function_coverage=1 00:26:42.301 --rc genhtml_legend=1 00:26:42.301 --rc geninfo_all_blocks=1 00:26:42.301 --rc geninfo_unexecuted_blocks=1 00:26:42.301 00:26:42.301 ' 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:42.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.301 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.302 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:42.302 12:36:25 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:42.302 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:42.302 12:36:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.879 12:36:30 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:48.879 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:48.879 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:48.879 Found net devices under 0000:86:00.0: cvl_0_0 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:48.879 Found net devices under 0000:86:00.1: cvl_0_1 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:48.879 12:36:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:48.879 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:48.879 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:48.879 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:48.879 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:48.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:26:48.879 00:26:48.879 --- 10.0.0.2 ping statistics --- 00:26:48.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.879 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:26:48.879 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:48.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:48.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:26:48.879 00:26:48.879 --- 10.0.0.1 ping statistics --- 00:26:48.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.879 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:26:48.879 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.879 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:48.880 ************************************ 00:26:48.880 START TEST nvmf_digest_clean 00:26:48.880 ************************************ 00:26:48.880 
12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=587594 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 587594 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 587594 ']' 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:48.880 12:36:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.880 [2024-11-20 12:36:31.196772] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:26:48.880 [2024-11-20 12:36:31.196824] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.880 [2024-11-20 12:36:31.275034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.880 [2024-11-20 12:36:31.313675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.880 [2024-11-20 12:36:31.313711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.880 [2024-11-20 12:36:31.313717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.880 [2024-11-20 12:36:31.313723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.880 [2024-11-20 12:36:31.313728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:48.880 [2024-11-20 12:36:31.314311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.880 null0 00:26:48.880 [2024-11-20 12:36:31.476877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.880 [2024-11-20 12:36:31.501106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=587631 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 587631 /var/tmp/bperf.sock 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 587631 ']' 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:48.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.880 [2024-11-20 12:36:31.553268] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:26:48.880 [2024-11-20 12:36:31.553310] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid587631 ] 00:26:48.880 [2024-11-20 12:36:31.627625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.880 [2024-11-20 12:36:31.669940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.880 12:36:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:49.139 nvme0n1 00:26:49.139 12:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:49.139 12:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:49.398 Running I/O for 2 seconds... 00:26:51.272 24738.00 IOPS, 96.63 MiB/s [2024-11-20T11:36:34.388Z] 24681.50 IOPS, 96.41 MiB/s 00:26:51.272 Latency(us) 00:26:51.272 [2024-11-20T11:36:34.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.272 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:51.272 nvme0n1 : 2.00 24697.70 96.48 0.00 0.00 5177.92 2592.95 17096.35 00:26:51.272 [2024-11-20T11:36:34.388Z] =================================================================================================================== 00:26:51.272 [2024-11-20T11:36:34.388Z] Total : 24697.70 96.48 0.00 0.00 5177.92 2592.95 17096.35 00:26:51.272 { 00:26:51.272 "results": [ 00:26:51.272 { 00:26:51.272 "job": "nvme0n1", 00:26:51.272 "core_mask": "0x2", 00:26:51.272 "workload": "randread", 00:26:51.272 "status": "finished", 00:26:51.272 "queue_depth": 128, 00:26:51.272 "io_size": 4096, 00:26:51.272 "runtime": 2.003871, 00:26:51.272 "iops": 24697.69760628304, 00:26:51.272 "mibps": 96.47538127454312, 00:26:51.272 "io_failed": 0, 00:26:51.272 "io_timeout": 0, 00:26:51.272 "avg_latency_us": 5177.921063487169, 00:26:51.272 "min_latency_us": 2592.946086956522, 00:26:51.272 "max_latency_us": 17096.347826086956 00:26:51.272 } 00:26:51.272 ], 00:26:51.272 "core_count": 1 00:26:51.272 } 00:26:51.272 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:51.272 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:26:51.272 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:51.272 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:51.272 | select(.opcode=="crc32c") 00:26:51.272 | "\(.module_name) \(.executed)"' 00:26:51.272 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:51.532 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:51.532 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:51.532 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:51.532 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:51.532 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 587631 00:26:51.532 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 587631 ']' 00:26:51.532 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 587631 00:26:51.532 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:51.532 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:51.532 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 587631 00:26:51.532 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:51.532 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:51.532 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 587631' 00:26:51.532 killing process with pid 587631 00:26:51.532 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 587631 00:26:51.532 Received shutdown signal, test time was about 2.000000 seconds 00:26:51.532 00:26:51.532 Latency(us) 00:26:51.532 [2024-11-20T11:36:34.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.532 [2024-11-20T11:36:34.648Z] =================================================================================================================== 00:26:51.532 [2024-11-20T11:36:34.648Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:51.532 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 587631 00:26:51.791 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:51.791 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:51.791 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:51.791 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:51.791 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:51.791 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:51.791 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:51.791 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=588102 00:26:51.791 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 588102 /var/tmp/bperf.sock 00:26:51.791 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:51.791 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 588102 ']' 00:26:51.791 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:51.791 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:51.791 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:51.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:51.791 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:51.791 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:51.791 [2024-11-20 12:36:34.820969] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:26:51.791 [2024-11-20 12:36:34.821021] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid588102 ] 00:26:51.791 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:51.791 Zero copy mechanism will not be used. 
00:26:51.791 [2024-11-20 12:36:34.898124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.050 [2024-11-20 12:36:34.937088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.050 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:52.050 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:52.050 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:52.050 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:52.050 12:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:52.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:52.309 12:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:52.568 nvme0n1 00:26:52.568 12:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:52.568 12:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:52.826 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:52.826 Zero copy mechanism will not be used. 00:26:52.826 Running I/O for 2 seconds... 
00:26:54.700 5770.00 IOPS, 721.25 MiB/s [2024-11-20T11:36:37.816Z] 5709.00 IOPS, 713.62 MiB/s 00:26:54.700 Latency(us) 00:26:54.700 [2024-11-20T11:36:37.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.700 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:54.700 nvme0n1 : 2.00 5711.32 713.92 0.00 0.00 2798.92 837.01 6439.62 00:26:54.700 [2024-11-20T11:36:37.816Z] =================================================================================================================== 00:26:54.700 [2024-11-20T11:36:37.816Z] Total : 5711.32 713.92 0.00 0.00 2798.92 837.01 6439.62 00:26:54.700 { 00:26:54.700 "results": [ 00:26:54.700 { 00:26:54.700 "job": "nvme0n1", 00:26:54.700 "core_mask": "0x2", 00:26:54.700 "workload": "randread", 00:26:54.700 "status": "finished", 00:26:54.700 "queue_depth": 16, 00:26:54.700 "io_size": 131072, 00:26:54.700 "runtime": 2.001989, 00:26:54.700 "iops": 5711.320092168338, 00:26:54.700 "mibps": 713.9150115210423, 00:26:54.700 "io_failed": 0, 00:26:54.700 "io_timeout": 0, 00:26:54.700 "avg_latency_us": 2798.9171870318123, 00:26:54.700 "min_latency_us": 837.008695652174, 00:26:54.700 "max_latency_us": 6439.624347826087 00:26:54.700 } 00:26:54.700 ], 00:26:54.700 "core_count": 1 00:26:54.700 } 00:26:54.700 12:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:54.700 12:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:54.700 12:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:54.700 12:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:54.700 | select(.opcode=="crc32c") 00:26:54.700 | "\(.module_name) \(.executed)"' 00:26:54.700 12:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:54.960 12:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:54.960 12:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:54.960 12:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:54.960 12:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:54.960 12:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 588102 00:26:54.960 12:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 588102 ']' 00:26:54.960 12:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 588102 00:26:54.960 12:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:54.960 12:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:54.960 12:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 588102 00:26:54.960 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:54.960 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:54.960 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 588102' 00:26:54.960 killing process with pid 588102 00:26:54.960 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 588102 00:26:54.960 Received shutdown signal, test time was about 2.000000 seconds 00:26:54.960 
00:26:54.960 Latency(us) 00:26:54.960 [2024-11-20T11:36:38.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.960 [2024-11-20T11:36:38.076Z] =================================================================================================================== 00:26:54.960 [2024-11-20T11:36:38.076Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:54.960 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 588102 00:26:55.219 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:55.219 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:55.219 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:55.219 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:55.220 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:55.220 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:55.220 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:55.220 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=588744 00:26:55.220 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 588744 /var/tmp/bperf.sock 00:26:55.220 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:55.220 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 588744 ']' 00:26:55.220 12:36:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:55.220 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:55.220 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:55.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:55.220 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:55.220 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:55.220 [2024-11-20 12:36:38.224319] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:26:55.220 [2024-11-20 12:36:38.224366] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid588744 ] 00:26:55.220 [2024-11-20 12:36:38.298300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.479 [2024-11-20 12:36:38.341170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.479 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:55.479 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:55.479 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:55.479 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:55.479 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:55.736 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:55.736 12:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:55.995 nvme0n1 00:26:55.995 12:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:55.995 12:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:55.995 Running I/O for 2 seconds... 
00:26:58.318 27281.00 IOPS, 106.57 MiB/s [2024-11-20T11:36:41.434Z] 27385.00 IOPS, 106.97 MiB/s 00:26:58.318 Latency(us) 00:26:58.318 [2024-11-20T11:36:41.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.318 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:58.318 nvme0n1 : 2.00 27416.11 107.09 0.00 0.00 4664.56 2265.27 16070.57 00:26:58.318 [2024-11-20T11:36:41.434Z] =================================================================================================================== 00:26:58.318 [2024-11-20T11:36:41.434Z] Total : 27416.11 107.09 0.00 0.00 4664.56 2265.27 16070.57 00:26:58.318 { 00:26:58.318 "results": [ 00:26:58.318 { 00:26:58.318 "job": "nvme0n1", 00:26:58.318 "core_mask": "0x2", 00:26:58.318 "workload": "randwrite", 00:26:58.318 "status": "finished", 00:26:58.318 "queue_depth": 128, 00:26:58.318 "io_size": 4096, 00:26:58.318 "runtime": 2.002399, 00:26:58.318 "iops": 27416.11437081221, 00:26:58.318 "mibps": 107.0941967609852, 00:26:58.318 "io_failed": 0, 00:26:58.318 "io_timeout": 0, 00:26:58.318 "avg_latency_us": 4664.559136675605, 00:26:58.318 "min_latency_us": 2265.2660869565216, 00:26:58.318 "max_latency_us": 16070.56695652174 00:26:58.318 } 00:26:58.318 ], 00:26:58.318 "core_count": 1 00:26:58.318 } 00:26:58.318 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:58.318 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:58.318 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:58.318 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:58.318 | select(.opcode=="crc32c") 00:26:58.318 | "\(.module_name) \(.executed)"' 00:26:58.318 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:58.318 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:58.318 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:58.318 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:58.318 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:58.319 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 588744 00:26:58.319 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 588744 ']' 00:26:58.319 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 588744 00:26:58.319 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:58.319 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:58.319 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 588744 00:26:58.319 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:58.319 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:58.319 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 588744' 00:26:58.319 killing process with pid 588744 00:26:58.319 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 588744 00:26:58.319 Received shutdown signal, test time was about 2.000000 seconds 00:26:58.319 
00:26:58.319 Latency(us) 00:26:58.319 [2024-11-20T11:36:41.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.319 [2024-11-20T11:36:41.435Z] =================================================================================================================== 00:26:58.319 [2024-11-20T11:36:41.435Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:58.319 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 588744 00:26:58.578 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:58.578 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:58.578 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:58.578 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:58.578 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:58.578 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:58.578 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:58.578 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=589264 00:26:58.578 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 589264 /var/tmp/bperf.sock 00:26:58.578 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:58.578 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 589264 ']' 00:26:58.578 12:36:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:58.578 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:58.578 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:58.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:58.578 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:58.578 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:58.578 [2024-11-20 12:36:41.593541] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:26:58.578 [2024-11-20 12:36:41.593589] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid589264 ] 00:26:58.578 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:58.578 Zero copy mechanism will not be used. 
00:26:58.578 [2024-11-20 12:36:41.667446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.838 [2024-11-20 12:36:41.705802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.838 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:58.838 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:58.838 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:58.838 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:58.838 12:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:59.097 12:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:59.097 12:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:59.355 nvme0n1 00:26:59.355 12:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:59.355 12:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:59.614 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:59.614 Zero copy mechanism will not be used. 00:26:59.614 Running I/O for 2 seconds... 
00:27:01.489 6580.00 IOPS, 822.50 MiB/s [2024-11-20T11:36:44.605Z] 6415.00 IOPS, 801.88 MiB/s 00:27:01.489 Latency(us) 00:27:01.489 [2024-11-20T11:36:44.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.489 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:01.489 nvme0n1 : 2.00 6411.13 801.39 0.00 0.00 2491.18 1837.86 10371.78 00:27:01.489 [2024-11-20T11:36:44.605Z] =================================================================================================================== 00:27:01.489 [2024-11-20T11:36:44.605Z] Total : 6411.13 801.39 0.00 0.00 2491.18 1837.86 10371.78 00:27:01.489 { 00:27:01.489 "results": [ 00:27:01.489 { 00:27:01.489 "job": "nvme0n1", 00:27:01.489 "core_mask": "0x2", 00:27:01.489 "workload": "randwrite", 00:27:01.489 "status": "finished", 00:27:01.489 "queue_depth": 16, 00:27:01.489 "io_size": 131072, 00:27:01.489 "runtime": 2.003702, 00:27:01.489 "iops": 6411.132992830271, 00:27:01.489 "mibps": 801.3916241037839, 00:27:01.489 "io_failed": 0, 00:27:01.489 "io_timeout": 0, 00:27:01.489 "avg_latency_us": 2491.1772794779627, 00:27:01.489 "min_latency_us": 1837.8573913043479, 00:27:01.489 "max_latency_us": 10371.784347826087 00:27:01.489 } 00:27:01.489 ], 00:27:01.489 "core_count": 1 00:27:01.489 } 00:27:01.489 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:01.489 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:01.489 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:01.489 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:01.489 | select(.opcode=="crc32c") 00:27:01.489 | "\(.module_name) \(.executed)"' 00:27:01.489 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:01.748 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:01.748 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:01.748 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:01.748 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:01.749 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 589264 00:27:01.749 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 589264 ']' 00:27:01.749 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 589264 00:27:01.749 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:01.749 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:01.749 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 589264 00:27:01.749 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:01.749 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:01.749 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 589264' 00:27:01.749 killing process with pid 589264 00:27:01.749 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 589264 00:27:01.749 Received shutdown signal, test time was about 2.000000 seconds 00:27:01.749 
00:27:01.749 Latency(us) 00:27:01.749 [2024-11-20T11:36:44.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.749 [2024-11-20T11:36:44.865Z] =================================================================================================================== 00:27:01.749 [2024-11-20T11:36:44.865Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:01.749 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 589264 00:27:02.009 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 587594 00:27:02.009 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 587594 ']' 00:27:02.009 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 587594 00:27:02.009 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:02.009 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:02.009 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 587594 00:27:02.009 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:02.009 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:02.009 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 587594' 00:27:02.009 killing process with pid 587594 00:27:02.009 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 587594 00:27:02.009 12:36:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 587594 00:27:02.268 00:27:02.268 real 0m14.023s 
00:27:02.268 user 0m26.893s 00:27:02.268 sys 0m4.543s 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:02.268 ************************************ 00:27:02.268 END TEST nvmf_digest_clean 00:27:02.268 ************************************ 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:02.268 ************************************ 00:27:02.268 START TEST nvmf_digest_error 00:27:02.268 ************************************ 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=589900 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 589900 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 589900 ']' 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:02.268 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.268 [2024-11-20 12:36:45.286994] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:27:02.268 [2024-11-20 12:36:45.287037] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:02.268 [2024-11-20 12:36:45.367665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.528 [2024-11-20 12:36:45.409431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.528 [2024-11-20 12:36:45.409466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:02.528 [2024-11-20 12:36:45.409473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:02.528 [2024-11-20 12:36:45.409479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:02.528 [2024-11-20 12:36:45.409484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:02.528 [2024-11-20 12:36:45.410057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.528 [2024-11-20 12:36:45.478497] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.528 12:36:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.528 null0 00:27:02.528 [2024-11-20 12:36:45.573874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.528 [2024-11-20 12:36:45.598093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=590006 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 590006 /var/tmp/bperf.sock 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 590006 ']' 
00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:02.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:02.528 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.787 [2024-11-20 12:36:45.648671] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:27:02.787 [2024-11-20 12:36:45.648709] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid590006 ] 00:27:02.787 [2024-11-20 12:36:45.721546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.787 [2024-11-20 12:36:45.763791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.787 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:02.787 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:02.787 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:02.787 12:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:03.046 12:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:03.046 12:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.046 12:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:03.046 12:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.046 12:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:03.046 12:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:03.305 nvme0n1 00:27:03.305 12:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:03.305 12:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.305 12:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:03.565 12:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.565 12:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:03.565 12:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:03.565 Running I/O for 2 seconds... 00:27:03.565 [2024-11-20 12:36:46.540206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.565 [2024-11-20 12:36:46.540239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.565 [2024-11-20 12:36:46.540251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.565 [2024-11-20 12:36:46.552552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.565 [2024-11-20 12:36:46.552581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.565 [2024-11-20 12:36:46.552591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.565 [2024-11-20 12:36:46.561059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.565 [2024-11-20 12:36:46.561082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.565 [2024-11-20 12:36:46.561090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.565 [2024-11-20 12:36:46.571967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.565 [2024-11-20 12:36:46.571988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19853 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.565 [2024-11-20 12:36:46.571997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.565 [2024-11-20 12:36:46.583252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.565 [2024-11-20 12:36:46.583274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.565 [2024-11-20 12:36:46.583283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.565 [2024-11-20 12:36:46.591769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.565 [2024-11-20 12:36:46.591790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.565 [2024-11-20 12:36:46.591798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.565 [2024-11-20 12:36:46.602958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.565 [2024-11-20 12:36:46.602980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.565 [2024-11-20 12:36:46.602988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.565 [2024-11-20 12:36:46.612094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.565 [2024-11-20 12:36:46.612115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.565 [2024-11-20 12:36:46.612123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.565 [2024-11-20 12:36:46.622635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.565 [2024-11-20 12:36:46.622656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.565 [2024-11-20 12:36:46.622665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.565 [2024-11-20 12:36:46.631820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.565 [2024-11-20 12:36:46.631840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.565 [2024-11-20 12:36:46.631848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.565 [2024-11-20 12:36:46.642122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.565 [2024-11-20 12:36:46.642143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.565 [2024-11-20 12:36:46.642152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.565 [2024-11-20 12:36:46.651639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 
00:27:03.565 [2024-11-20 12:36:46.651660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.565 [2024-11-20 12:36:46.651668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.565 [2024-11-20 12:36:46.660847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.565 [2024-11-20 12:36:46.660868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.565 [2024-11-20 12:36:46.660877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.565 [2024-11-20 12:36:46.670292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.565 [2024-11-20 12:36:46.670313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.565 [2024-11-20 12:36:46.670321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.825 [2024-11-20 12:36:46.680864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.825 [2024-11-20 12:36:46.680886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.825 [2024-11-20 12:36:46.680894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.825 [2024-11-20 12:36:46.690247] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.825 [2024-11-20 12:36:46.690269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.825 [2024-11-20 12:36:46.690276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.825 [2024-11-20 12:36:46.701743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.825 [2024-11-20 12:36:46.701764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.825 [2024-11-20 12:36:46.701771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.825 [2024-11-20 12:36:46.710838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.825 [2024-11-20 12:36:46.710859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.825 [2024-11-20 12:36:46.710867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.825 [2024-11-20 12:36:46.721124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.721149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.721157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.730647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.730667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.730676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.741250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.741271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.741279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.749565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.749586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.749594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.761524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.761546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.761554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.772860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.772881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.772889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.780643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.780663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.780671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.790943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.790970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.790978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.803418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.803440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.803448] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.815515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.815536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.815545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.825773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.825792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.825800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.836965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.836985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.836993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.847875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.847897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19521 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.847904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.857205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.857226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.857234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.869298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.869318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.869326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.882123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.882144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.882152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.894766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.894786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:23 nsid:1 lba:17349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.894794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.902994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.903014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.903028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.913619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.913639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.913647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.925971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 12:36:46.925991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.925999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.826 [2024-11-20 12:36:46.935695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:03.826 [2024-11-20 
12:36:46.935715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.826 [2024-11-20 12:36:46.935723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:46.944127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:46.944147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:46.944156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:46.956138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:46.956158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:46.956166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:46.969017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:46.969037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:46.969045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:46.980689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:46.980709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:46.980717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:46.991727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:46.991747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:46.991754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:47.004325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:47.004349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:47.004357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:47.016938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:47.016962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:47.016970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:47.033605] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:47.033624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:47.033632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:47.046248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:47.046268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:47.046276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:47.054935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:47.054961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:47.054969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:47.065446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:47.065467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:47.065475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:47.076024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:47.076044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:47.076051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:47.083933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:47.083960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:47.083968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:47.093964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:47.093984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:47.093992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:47.105406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:47.105426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:47.105433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:47.116833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:47.116853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:47.116861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:47.127722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:47.127741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:47.127749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:47.135881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:47.135901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:47.135908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:47.146930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:47.146955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:47.146964] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:47.158228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:47.158248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:47.158257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:47.166839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:47.166863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:47.166871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:47.179507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:47.179527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:47.179536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:47.190795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:47.190816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24662 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:47.190828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.086 [2024-11-20 12:36:47.199192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.086 [2024-11-20 12:36:47.199212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.086 [2024-11-20 12:36:47.199220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.346 [2024-11-20 12:36:47.210044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.346 [2024-11-20 12:36:47.210064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.346 [2024-11-20 12:36:47.210072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.221080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.221100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.221109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.232220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.232239] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:57 nsid:1 lba:754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.232247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.240447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.240466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.240474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.251693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.251713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.251721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.261358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.261378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.261386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.272272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 
12:36:47.272292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.272300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.280495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.280515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.280522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.292394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.292415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.292423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.304744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.304764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.304772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.315185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.315205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.315212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.324052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.324073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.324081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.337205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.337226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.337234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.348487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.348506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.348514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.360994] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.361015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.361023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.373762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.373783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.373795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.381556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.381575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.381583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.393589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.393609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.393617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.404669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.404689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.404698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.413788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.413808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.413816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.425820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.425839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.425847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.437254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.437273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.437281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.449977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.449997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.450005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.347 [2024-11-20 12:36:47.460126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.347 [2024-11-20 12:36:47.460147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.347 [2024-11-20 12:36:47.460156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.607 [2024-11-20 12:36:47.469678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.607 [2024-11-20 12:36:47.469701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.607 [2024-11-20 12:36:47.469710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.607 [2024-11-20 12:36:47.481818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.607 [2024-11-20 12:36:47.481837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.607 [2024-11-20 12:36:47.481845] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.607 [2024-11-20 12:36:47.492963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.607 [2024-11-20 12:36:47.492983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.607 [2024-11-20 12:36:47.492991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.607 [2024-11-20 12:36:47.502415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.607 [2024-11-20 12:36:47.502434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.607 [2024-11-20 12:36:47.502442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.607 [2024-11-20 12:36:47.513516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.607 [2024-11-20 12:36:47.513540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.607 [2024-11-20 12:36:47.513548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.607 [2024-11-20 12:36:47.522730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.607 [2024-11-20 12:36:47.522750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6470 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:04.607 [2024-11-20 12:36:47.522758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.607 23716.00 IOPS, 92.64 MiB/s [2024-11-20T11:36:47.723Z] [2024-11-20 12:36:47.533426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.607 [2024-11-20 12:36:47.533445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.607 [2024-11-20 12:36:47.533453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.607 [2024-11-20 12:36:47.545960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.607 [2024-11-20 12:36:47.545982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.607 [2024-11-20 12:36:47.545990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.607 [2024-11-20 12:36:47.554412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.607 [2024-11-20 12:36:47.554432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.607 [2024-11-20 12:36:47.554441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.607 [2024-11-20 12:36:47.564822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:04.607 [2024-11-20 12:36:47.564843] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.607 [2024-11-20 12:36:47.564851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.607 [2024-11-20 12:36:47.577650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370)
00:27:04.607 [2024-11-20 12:36:47.577671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.607 [2024-11-20 12:36:47.577679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.607 [2024-11-20 12:36:47.589045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370)
00:27:04.607 [2024-11-20 12:36:47.589066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.607 [2024-11-20 12:36:47.589075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record sequence — "data digest error on tqpair=(0x2196370)", a READ command notice (sqid:1, nsid:1, len:1, varying cid/lba), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1 — repeats for roughly 75 further single-block READs from 12:36:47.598 through 12:36:48.385 ...]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.390 [2024-11-20 12:36:48.385842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.390 [2024-11-20 12:36:48.395711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:05.390 [2024-11-20 12:36:48.395732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.390 [2024-11-20 12:36:48.395740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.390 [2024-11-20 12:36:48.404836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:05.390 [2024-11-20 12:36:48.404855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.390 [2024-11-20 12:36:48.404863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.390 [2024-11-20 12:36:48.415058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:05.390 [2024-11-20 12:36:48.415079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.390 [2024-11-20 12:36:48.415087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.390 [2024-11-20 12:36:48.425476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2196370) 00:27:05.390 [2024-11-20 12:36:48.425496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.390 [2024-11-20 12:36:48.425504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.390 [2024-11-20 12:36:48.433738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:05.390 [2024-11-20 12:36:48.433757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.390 [2024-11-20 12:36:48.433765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.390 [2024-11-20 12:36:48.444767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:05.390 [2024-11-20 12:36:48.444787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.390 [2024-11-20 12:36:48.444795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.390 [2024-11-20 12:36:48.456274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:05.390 [2024-11-20 12:36:48.456295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.390 [2024-11-20 12:36:48.456304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.390 [2024-11-20 12:36:48.465495] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:05.390 [2024-11-20 12:36:48.465516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.390 [2024-11-20 12:36:48.465524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.390 [2024-11-20 12:36:48.473407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:05.390 [2024-11-20 12:36:48.473428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.390 [2024-11-20 12:36:48.473437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.390 [2024-11-20 12:36:48.483956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:05.390 [2024-11-20 12:36:48.483977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.390 [2024-11-20 12:36:48.483986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.390 [2024-11-20 12:36:48.493992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:05.390 [2024-11-20 12:36:48.494013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.390 [2024-11-20 12:36:48.494022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:05.390 [2024-11-20 12:36:48.502937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:05.390 [2024-11-20 12:36:48.502966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.390 [2024-11-20 12:36:48.502975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.649 [2024-11-20 12:36:48.512242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:05.649 [2024-11-20 12:36:48.512263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.649 [2024-11-20 12:36:48.512275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.649 [2024-11-20 12:36:48.522597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2196370) 00:27:05.649 [2024-11-20 12:36:48.522618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.649 [2024-11-20 12:36:48.522626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.649 24525.00 IOPS, 95.80 MiB/s 00:27:05.649 Latency(us) 00:27:05.649 [2024-11-20T11:36:48.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.649 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:05.649 nvme0n1 : 2.00 24546.52 95.88 0.00 0.00 5209.40 2607.19 18464.06 00:27:05.649 [2024-11-20T11:36:48.765Z] 
=================================================================================================================== 00:27:05.649 [2024-11-20T11:36:48.765Z] Total : 24546.52 95.88 0.00 0.00 5209.40 2607.19 18464.06 00:27:05.649 { 00:27:05.649 "results": [ 00:27:05.649 { 00:27:05.649 "job": "nvme0n1", 00:27:05.649 "core_mask": "0x2", 00:27:05.649 "workload": "randread", 00:27:05.649 "status": "finished", 00:27:05.649 "queue_depth": 128, 00:27:05.649 "io_size": 4096, 00:27:05.649 "runtime": 2.003461, 00:27:05.649 "iops": 24546.52224325804, 00:27:05.649 "mibps": 95.88485251272672, 00:27:05.649 "io_failed": 0, 00:27:05.649 "io_timeout": 0, 00:27:05.649 "avg_latency_us": 5209.396842066177, 00:27:05.649 "min_latency_us": 2607.1930434782607, 00:27:05.649 "max_latency_us": 18464.055652173913 00:27:05.649 } 00:27:05.649 ], 00:27:05.649 "core_count": 1 00:27:05.649 } 00:27:05.649 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:05.649 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:05.649 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:05.649 | .driver_specific 00:27:05.649 | .nvme_error 00:27:05.649 | .status_code 00:27:05.649 | .command_transient_transport_error' 00:27:05.649 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:05.649 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 192 > 0 )) 00:27:05.649 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 590006 00:27:05.649 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 590006 ']' 00:27:05.649 12:36:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 590006 00:27:05.649 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:05.649 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:05.649 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 590006 00:27:05.909 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:05.909 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:05.909 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 590006' 00:27:05.909 killing process with pid 590006 00:27:05.909 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 590006 00:27:05.909 Received shutdown signal, test time was about 2.000000 seconds 00:27:05.909 00:27:05.909 Latency(us) 00:27:05.909 [2024-11-20T11:36:49.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.909 [2024-11-20T11:36:49.025Z] =================================================================================================================== 00:27:05.909 [2024-11-20T11:36:49.025Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:05.909 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 590006 00:27:05.909 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:05.909 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:05.909 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # 
rw=randread 00:27:05.909 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:05.909 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:05.909 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=590481 00:27:05.909 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 590481 /var/tmp/bperf.sock 00:27:05.909 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:05.909 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 590481 ']' 00:27:05.909 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:05.909 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:05.909 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:05.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:05.909 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:05.909 12:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:05.909 [2024-11-20 12:36:49.008219] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:27:05.909 [2024-11-20 12:36:49.008271] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid590481 ] 00:27:05.909 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:05.909 Zero copy mechanism will not be used. 00:27:06.169 [2024-11-20 12:36:49.083255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.169 [2024-11-20 12:36:49.120857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.169 12:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:06.169 12:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:06.169 12:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:06.169 12:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:06.429 12:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:06.429 12:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.429 12:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:06.429 12:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.429 12:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:06.429 12:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:06.688 nvme0n1 00:27:06.688 12:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:06.688 12:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.688 12:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:06.688 12:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.688 12:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:06.688 12:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:06.948 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:06.948 Zero copy mechanism will not be used. 00:27:06.948 Running I/O for 2 seconds... 
00:27:06.948 [2024-11-20 12:36:49.881500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.948 [2024-11-20 12:36:49.881538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.948 [2024-11-20 12:36:49.881549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.948 [2024-11-20 12:36:49.887400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.948 [2024-11-20 12:36:49.887425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.948 [2024-11-20 12:36:49.887435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.948 [2024-11-20 12:36:49.890543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.948 [2024-11-20 12:36:49.890566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.948 [2024-11-20 12:36:49.890574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.948 [2024-11-20 12:36:49.896230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.948 [2024-11-20 12:36:49.896254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.948 [2024-11-20 12:36:49.896262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.948 [2024-11-20 12:36:49.901774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.948 [2024-11-20 12:36:49.901796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.948 [2024-11-20 12:36:49.901805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.948 [2024-11-20 12:36:49.907558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.948 [2024-11-20 12:36:49.907580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.948 [2024-11-20 12:36:49.907589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.948 [2024-11-20 12:36:49.913428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.948 [2024-11-20 12:36:49.913451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.948 [2024-11-20 12:36:49.913460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.948 [2024-11-20 12:36:49.919282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.948 [2024-11-20 12:36:49.919304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.948 [2024-11-20 12:36:49.919313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.948 [2024-11-20 12:36:49.925147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.948 [2024-11-20 12:36:49.925170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.948 [2024-11-20 12:36:49.925178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.948 [2024-11-20 12:36:49.930825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:49.930847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:49.930856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:49.936729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:49.936751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:49.936760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:49.942355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:49.942378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:06.949 [2024-11-20 12:36:49.942386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:49.947917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:49.947939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:49.947952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:49.953375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:49.953397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:49.953405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:49.958855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:49.958877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:49.958889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:49.964651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:49.964674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:49.964682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:49.970167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:49.970190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:49.970199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:49.976151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:49.976173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:49.976181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:49.981986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:49.982007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:49.982016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:49.987810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:49.987833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:49.987841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:49.993354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:49.993376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:49.993384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:49.999116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:49.999138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:49.999146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:50.004811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:50.004834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:50.004843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:50.010553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 
00:27:06.949 [2024-11-20 12:36:50.010580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:50.010588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:50.016342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:50.016368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:50.016376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:50.022239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:50.022262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:50.022272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:50.027987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:50.028009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:50.028017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:50.033529] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:50.033550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:50.033558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:50.039326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:50.039349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:50.039356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:50.045127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:50.045150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:50.045158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:50.050966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:50.050987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:50.050996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:50.056687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:50.056710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:50.056719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.949 [2024-11-20 12:36:50.062445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:06.949 [2024-11-20 12:36:50.062469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.949 [2024-11-20 12:36:50.062478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.209 [2024-11-20 12:36:50.068212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.209 [2024-11-20 12:36:50.068235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.209 [2024-11-20 12:36:50.068243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.209 [2024-11-20 12:36:50.074057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.209 [2024-11-20 12:36:50.074079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.209 [2024-11-20 12:36:50.074087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.209 [2024-11-20 12:36:50.079868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.209 [2024-11-20 12:36:50.079891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.209 [2024-11-20 12:36:50.079898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.209 [2024-11-20 12:36:50.085718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.209 [2024-11-20 12:36:50.085741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.209 [2024-11-20 12:36:50.085750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.209 [2024-11-20 12:36:50.091421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.209 [2024-11-20 12:36:50.091443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.209 [2024-11-20 12:36:50.091452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.209 [2024-11-20 12:36:50.097787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.209 [2024-11-20 12:36:50.097810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.209 [2024-11-20 12:36:50.097818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.209 [2024-11-20 12:36:50.103707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.209 [2024-11-20 12:36:50.103730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.209 [2024-11-20 12:36:50.103738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.209 [2024-11-20 12:36:50.109641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.209 [2024-11-20 12:36:50.109663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.209 [2024-11-20 12:36:50.109679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.209 [2024-11-20 12:36:50.115439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.209 [2024-11-20 12:36:50.115462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.209 [2024-11-20 12:36:50.115470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.209 [2024-11-20 12:36:50.121116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.209 [2024-11-20 12:36:50.121139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:07.209 [2024-11-20 12:36:50.121147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.209 [2024-11-20 12:36:50.126959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.209 [2024-11-20 12:36:50.126982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.209 [2024-11-20 12:36:50.126990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.209 [2024-11-20 12:36:50.132565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.209 [2024-11-20 12:36:50.132587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.209 [2024-11-20 12:36:50.132595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.209 [2024-11-20 12:36:50.138232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.209 [2024-11-20 12:36:50.138255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.209 [2024-11-20 12:36:50.138263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.209 [2024-11-20 12:36:50.143988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.209 [2024-11-20 12:36:50.144010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.209 [2024-11-20 12:36:50.144018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.209 [2024-11-20 12:36:50.149832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.209 [2024-11-20 12:36:50.149854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.209 [2024-11-20 12:36:50.149862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.209 [2024-11-20 12:36:50.155549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.209 [2024-11-20 12:36:50.155571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.209 [2024-11-20 12:36:50.155580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.209 [2024-11-20 12:36:50.161076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.209 [2024-11-20 12:36:50.161102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.209 [2024-11-20 12:36:50.161110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.209 [2024-11-20 12:36:50.166765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.209 [2024-11-20 12:36:50.166787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.209 [2024-11-20 12:36:50.166796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.172449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.172471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.172480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.178038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.178059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.178068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.183795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.183817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.183826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.189714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 
00:27:07.210 [2024-11-20 12:36:50.189737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.189746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.195438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.195460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.195468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.201102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.201124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.201132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.206825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.206847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.206855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.212652] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.212675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.212683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.218376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.218399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.218407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.224166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.224188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.224196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.229987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.230009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.230018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.235943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.235971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.235979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.241824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.241847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.241855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.247620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.247642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.247651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.253366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.253389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.253397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.259092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.259114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.259126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.264738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.264760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.264768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.270446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.270468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.270477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.276016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.276038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.276046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.281620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.281642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.281651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.287009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.287032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.287040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.292486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.292508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.292517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.298081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.298103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:07.210 [2024-11-20 12:36:50.298111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.303598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.303621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.303629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.308932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.308965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.308974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.314545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.314567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.314575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.210 [2024-11-20 12:36:50.320066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.210 [2024-11-20 12:36:50.320087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.210 [2024-11-20 12:36:50.320096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.470 [2024-11-20 12:36:50.325615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.470 [2024-11-20 12:36:50.325638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.470 [2024-11-20 12:36:50.325646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.470 [2024-11-20 12:36:50.331127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.470 [2024-11-20 12:36:50.331150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.470 [2024-11-20 12:36:50.331159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.470 [2024-11-20 12:36:50.336727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.470 [2024-11-20 12:36:50.336749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.470 [2024-11-20 12:36:50.336758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.342196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.342218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.342226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.347556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.347578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.347586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.352899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.352921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.352929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.358217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.358242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.358251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.363608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 
00:27:07.471 [2024-11-20 12:36:50.363631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.363639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.369286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.369310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.369319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.374972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.374995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.375004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.380470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.380493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.380501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.386079] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.386102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.386110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.391331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.391354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.391362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.396672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.396696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.396705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.402078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.402101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.402113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.407689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.407712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.407721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.413218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.413240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.413248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.418629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.418651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.418659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.423986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.424008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.424016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.429153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.429176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.429184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.434274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.434296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.434304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.439603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.439626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.439634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.444907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.444929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.444938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.450228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.450254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.450262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.455537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.455559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.455568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.460795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.460818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.460827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.466117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.466139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:07.471 [2024-11-20 12:36:50.466147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.471434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.471456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.471465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.476759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.476781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.471 [2024-11-20 12:36:50.476789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.471 [2024-11-20 12:36:50.482166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.471 [2024-11-20 12:36:50.482188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.482197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.472 [2024-11-20 12:36:50.487487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.472 [2024-11-20 12:36:50.487509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.487518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.472 [2024-11-20 12:36:50.492750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.472 [2024-11-20 12:36:50.492773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.492781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.472 [2024-11-20 12:36:50.498187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.472 [2024-11-20 12:36:50.498208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.498217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.472 [2024-11-20 12:36:50.503520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.472 [2024-11-20 12:36:50.503542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.503551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.472 [2024-11-20 12:36:50.508887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.472 [2024-11-20 12:36:50.508909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.508918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.472 [2024-11-20 12:36:50.514270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.472 [2024-11-20 12:36:50.514292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.514300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.472 [2024-11-20 12:36:50.519620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.472 [2024-11-20 12:36:50.519643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.519651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.472 [2024-11-20 12:36:50.524977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.472 [2024-11-20 12:36:50.524999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.525008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.472 [2024-11-20 12:36:50.530266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 
00:27:07.472 [2024-11-20 12:36:50.530288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.530296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.472 [2024-11-20 12:36:50.535600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.472 [2024-11-20 12:36:50.535622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.535630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.472 [2024-11-20 12:36:50.540910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.472 [2024-11-20 12:36:50.540932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.540943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.472 [2024-11-20 12:36:50.546248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.472 [2024-11-20 12:36:50.546269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.546278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.472 [2024-11-20 12:36:50.551637] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.472 [2024-11-20 12:36:50.551658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.551666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.472 [2024-11-20 12:36:50.557007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.472 [2024-11-20 12:36:50.557029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.557037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.472 [2024-11-20 12:36:50.562312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.472 [2024-11-20 12:36:50.562334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.562342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.472 [2024-11-20 12:36:50.567642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.472 [2024-11-20 12:36:50.567664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.567672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:27:07.472 [2024-11-20 12:36:50.572934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.472 [2024-11-20 12:36:50.572963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.572972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.472 [2024-11-20 12:36:50.578166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.472 [2024-11-20 12:36:50.578188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.578195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.472 [2024-11-20 12:36:50.583482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.472 [2024-11-20 12:36:50.583504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.472 [2024-11-20 12:36:50.583511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.732 [2024-11-20 12:36:50.588747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.732 [2024-11-20 12:36:50.588773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.732 [2024-11-20 12:36:50.588781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.732 [2024-11-20 12:36:50.594091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.732 [2024-11-20 12:36:50.594113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.732 [2024-11-20 12:36:50.594121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.732 [2024-11-20 12:36:50.599452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.732 [2024-11-20 12:36:50.599474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.732 [2024-11-20 12:36:50.599481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.732 [2024-11-20 12:36:50.604751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.732 [2024-11-20 12:36:50.604773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.732 [2024-11-20 12:36:50.604781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.732 [2024-11-20 12:36:50.610043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.732 [2024-11-20 12:36:50.610064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.732 [2024-11-20 12:36:50.610072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.732 [2024-11-20 12:36:50.615351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.732 [2024-11-20 12:36:50.615373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.732 [2024-11-20 12:36:50.615381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.732 [2024-11-20 12:36:50.620672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.732 [2024-11-20 12:36:50.620694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.732 [2024-11-20 12:36:50.620702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.732 [2024-11-20 12:36:50.625971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.732 [2024-11-20 12:36:50.625993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.732 [2024-11-20 12:36:50.626001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.732 [2024-11-20 12:36:50.631283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.732 [2024-11-20 12:36:50.631305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:07.732 [2024-11-20 12:36:50.631314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.732 [2024-11-20 12:36:50.636591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.732 [2024-11-20 12:36:50.636613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.732 [2024-11-20 12:36:50.636622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.732 [2024-11-20 12:36:50.641966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.732 [2024-11-20 12:36:50.641988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.732 [2024-11-20 12:36:50.641997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.732 [2024-11-20 12:36:50.647367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.732 [2024-11-20 12:36:50.647390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.732 [2024-11-20 12:36:50.647400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.732 [2024-11-20 12:36:50.652714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.732 [2024-11-20 12:36:50.652736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.732 [2024-11-20 12:36:50.652745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.732 [2024-11-20 12:36:50.658072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.732 [2024-11-20 12:36:50.658094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.732 [2024-11-20 12:36:50.658103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.733 [2024-11-20 12:36:50.663381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.733 [2024-11-20 12:36:50.663404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.733 [2024-11-20 12:36:50.663413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.733 [2024-11-20 12:36:50.668762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.733 [2024-11-20 12:36:50.668785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.733 [2024-11-20 12:36:50.668794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.733 [2024-11-20 12:36:50.674176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.733 [2024-11-20 12:36:50.674199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.733 [2024-11-20 12:36:50.674207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.733 [2024-11-20 12:36:50.679509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.733 [2024-11-20 12:36:50.679531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.733 [2024-11-20 12:36:50.679544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.733 [2024-11-20 12:36:50.684920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.733 [2024-11-20 12:36:50.684942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.733 [2024-11-20 12:36:50.684960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.733 [2024-11-20 12:36:50.690312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.733 [2024-11-20 12:36:50.690334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.733 [2024-11-20 12:36:50.690343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.733 [2024-11-20 12:36:50.695647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 
00:27:07.733 [2024-11-20 12:36:50.695670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.733 [2024-11-20 12:36:50.695679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.733 [2024-11-20 12:36:50.701030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.733 [2024-11-20 12:36:50.701052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.733 [2024-11-20 12:36:50.701060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.733 [2024-11-20 12:36:50.706425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.733 [2024-11-20 12:36:50.706447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.733 [2024-11-20 12:36:50.706456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.733 [2024-11-20 12:36:50.711778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.733 [2024-11-20 12:36:50.711800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.733 [2024-11-20 12:36:50.711809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.733 [2024-11-20 12:36:50.717148] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.733 [2024-11-20 12:36:50.717170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.733 [2024-11-20 12:36:50.717178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.733 [2024-11-20 12:36:50.722501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.733 [2024-11-20 12:36:50.722523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.733 [2024-11-20 12:36:50.722531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.733 [2024-11-20 12:36:50.727839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.733 [2024-11-20 12:36:50.727861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.733 [2024-11-20 12:36:50.727870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.733 [2024-11-20 12:36:50.733195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.733 [2024-11-20 12:36:50.733217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.733 [2024-11-20 12:36:50.733225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:07.733 [2024-11-20 12:36:50.738719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.733 [2024-11-20 12:36:50.738743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.733 [2024-11-20 12:36:50.738751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.733 [2024-11-20 12:36:50.744099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.733 [2024-11-20 12:36:50.744122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.733 [2024-11-20 12:36:50.744131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.733 [2024-11-20 12:36:50.749430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.733 [2024-11-20 12:36:50.749452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.733 [2024-11-20 12:36:50.749461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.733 [2024-11-20 12:36:50.754794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.733 [2024-11-20 12:36:50.754816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.733 [2024-11-20 12:36:50.754824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.733 [2024-11-20 12:36:50.760120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.734 [2024-11-20 12:36:50.760144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.734 [2024-11-20 12:36:50.760152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.734 [2024-11-20 12:36:50.765709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.734 [2024-11-20 12:36:50.765734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.734 [2024-11-20 12:36:50.765743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.734 [2024-11-20 12:36:50.771272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.734 [2024-11-20 12:36:50.771295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.734 [2024-11-20 12:36:50.771308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.734 [2024-11-20 12:36:50.776656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.734 [2024-11-20 12:36:50.776678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.734 [2024-11-20 12:36:50.776687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.734 [2024-11-20 12:36:50.781982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.734 [2024-11-20 12:36:50.782003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.734 [2024-11-20 12:36:50.782011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.734 [2024-11-20 12:36:50.787399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.734 [2024-11-20 12:36:50.787423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.734 [2024-11-20 12:36:50.787432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.734 [2024-11-20 12:36:50.792820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.734 [2024-11-20 12:36:50.792843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.734 [2024-11-20 12:36:50.792851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.734 [2024-11-20 12:36:50.798243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.734 [2024-11-20 12:36:50.798266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:07.734 [2024-11-20 12:36:50.798274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.734 [2024-11-20 12:36:50.803654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.734 [2024-11-20 12:36:50.803677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.734 [2024-11-20 12:36:50.803685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.734 [2024-11-20 12:36:50.809068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.734 [2024-11-20 12:36:50.809091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.734 [2024-11-20 12:36:50.809099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.734 [2024-11-20 12:36:50.814383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.734 [2024-11-20 12:36:50.814406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.734 [2024-11-20 12:36:50.814415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.734 [2024-11-20 12:36:50.819768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.734 [2024-11-20 12:36:50.819795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.734 [2024-11-20 12:36:50.819804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.734 [2024-11-20 12:36:50.825180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.734 [2024-11-20 12:36:50.825202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.734 [2024-11-20 12:36:50.825210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.734 [2024-11-20 12:36:50.830621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.734 [2024-11-20 12:36:50.830643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.734 [2024-11-20 12:36:50.830651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.734 [2024-11-20 12:36:50.836073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.734 [2024-11-20 12:36:50.836095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.734 [2024-11-20 12:36:50.836104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.734 [2024-11-20 12:36:50.841483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.734 [2024-11-20 12:36:50.841505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.734 [2024-11-20 12:36:50.841514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.734 [2024-11-20 12:36:50.846916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.734 [2024-11-20 12:36:50.846939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.734 [2024-11-20 12:36:50.846955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.852278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.852301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.852311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.857623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.857648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.857658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.862958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 
00:27:07.995 [2024-11-20 12:36:50.862980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.862989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.868615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.868637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.868645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.995 5572.00 IOPS, 696.50 MiB/s [2024-11-20T11:36:51.111Z] [2024-11-20 12:36:50.876603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.876625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.876634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.884051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.884075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.884083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.890999] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.891022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.891030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.897279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.897302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.897310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.903329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.903352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.903361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.909483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.909506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.909514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.916930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.916964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.916973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.923805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.923828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.923841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.929755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.929780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.929789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.936814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.936838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.936846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.944034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.944057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.944066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.951869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.951892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.951901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.958586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.958611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.958620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.965252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.965280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.965289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.972259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.972284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.972293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.981004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.981028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.981036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.995 [2024-11-20 12:36:50.988190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.995 [2024-11-20 12:36:50.988219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.995 [2024-11-20 12:36:50.988227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.996 [2024-11-20 12:36:50.994091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.996 [2024-11-20 12:36:50.994115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:07.996 [2024-11-20 12:36:50.994124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.996 [2024-11-20 12:36:51.000260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.996 [2024-11-20 12:36:51.000284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.996 [2024-11-20 12:36:51.000292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.996 [2024-11-20 12:36:51.006971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.996 [2024-11-20 12:36:51.006994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.996 [2024-11-20 12:36:51.007002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.996 [2024-11-20 12:36:51.014241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.996 [2024-11-20 12:36:51.014265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.996 [2024-11-20 12:36:51.014274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.996 [2024-11-20 12:36:51.022316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.996 [2024-11-20 12:36:51.022340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.996 [2024-11-20 12:36:51.022350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.996 [2024-11-20 12:36:51.030163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.996 [2024-11-20 12:36:51.030187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.996 [2024-11-20 12:36:51.030195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.996 [2024-11-20 12:36:51.037876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.996 [2024-11-20 12:36:51.037901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.996 [2024-11-20 12:36:51.037910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.996 [2024-11-20 12:36:51.045706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.996 [2024-11-20 12:36:51.045731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.996 [2024-11-20 12:36:51.045741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.996 [2024-11-20 12:36:51.053492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.996 [2024-11-20 12:36:51.053515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.996 [2024-11-20 12:36:51.053524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.996 [2024-11-20 12:36:51.060012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.996 [2024-11-20 12:36:51.060037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.996 [2024-11-20 12:36:51.060046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.996 [2024-11-20 12:36:51.067604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.996 [2024-11-20 12:36:51.067628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.996 [2024-11-20 12:36:51.067637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.996 [2024-11-20 12:36:51.074206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.996 [2024-11-20 12:36:51.074230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.996 [2024-11-20 12:36:51.074238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.996 [2024-11-20 12:36:51.081527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 
00:27:07.996 [2024-11-20 12:36:51.081550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.996 [2024-11-20 12:36:51.081559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:07.996 [2024-11-20 12:36:51.089033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.996 [2024-11-20 12:36:51.089058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.996 [2024-11-20 12:36:51.089066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.996 [2024-11-20 12:36:51.096793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.996 [2024-11-20 12:36:51.096818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.996 [2024-11-20 12:36:51.096827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.996 [2024-11-20 12:36:51.102879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:07.996 [2024-11-20 12:36:51.102902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.996 [2024-11-20 12:36:51.102910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:07.996 [2024-11-20 12:36:51.108615] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:07.996 [2024-11-20 12:36:51.108638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.996 [2024-11-20 12:36:51.108651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.259 [2024-11-20 12:36:51.114347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.259 [2024-11-20 12:36:51.114370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.259 [2024-11-20 12:36:51.114378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.259 [2024-11-20 12:36:51.120002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.259 [2024-11-20 12:36:51.120026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.259 [2024-11-20 12:36:51.120034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.259 [2024-11-20 12:36:51.125671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.259 [2024-11-20 12:36:51.125694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.259 [2024-11-20 12:36:51.125701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.259 [2024-11-20 12:36:51.131921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.259 [2024-11-20 12:36:51.131944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.259 [2024-11-20 12:36:51.131958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.259 [2024-11-20 12:36:51.138194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.259 [2024-11-20 12:36:51.138217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.259 [2024-11-20 12:36:51.138225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.259 [2024-11-20 12:36:51.144609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.259 [2024-11-20 12:36:51.144632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.259 [2024-11-20 12:36:51.144640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.259 [2024-11-20 12:36:51.151061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.259 [2024-11-20 12:36:51.151084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.259 [2024-11-20 12:36:51.151092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.259 [2024-11-20 12:36:51.157358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.259 [2024-11-20 12:36:51.157381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.259 [2024-11-20 12:36:51.157389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.259 [2024-11-20 12:36:51.163545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.259 [2024-11-20 12:36:51.163572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.259 [2024-11-20 12:36:51.163580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.259 [2024-11-20 12:36:51.169972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.259 [2024-11-20 12:36:51.169995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.259 [2024-11-20 12:36:51.170004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.259 [2024-11-20 12:36:51.176272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.259 [2024-11-20 12:36:51.176294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.259 [2024-11-20 12:36:51.176302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.259 [2024-11-20 12:36:51.183164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.259 [2024-11-20 12:36:51.183186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.259 [2024-11-20 12:36:51.183194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.259 [2024-11-20 12:36:51.189631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.259 [2024-11-20 12:36:51.189655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.259 [2024-11-20 12:36:51.189663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.259 [2024-11-20 12:36:51.193385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.193406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.193414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.197955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.197977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.197986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.203699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.203721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.203729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.209163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.209185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.209193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.214619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.214642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.214650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.220090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.220112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.220120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.225651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.225673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.225681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.231030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.231051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.231060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.236285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.236306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.236314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.241439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.241460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.241468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.246912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.246934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.246943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.252537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.252558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.252567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.258207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.258228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.258240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.263826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.263848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.263857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.269264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.269286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.269294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.274572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.274594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.274604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.280416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.280438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.280446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.286012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.286033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.286042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.291596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.291618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.291626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.297229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.297251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.297259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.302688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.302710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.302719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.308424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.308445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.308453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.313662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.313685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.313693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.319139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.319162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.319170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.324528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.324549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.324557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.330035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.330056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.330064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.260 [2024-11-20 12:36:51.335534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.260 [2024-11-20 12:36:51.335555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.260 [2024-11-20 12:36:51.335563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.261 [2024-11-20 12:36:51.341181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.261 [2024-11-20 12:36:51.341203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.261 [2024-11-20 12:36:51.341211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.261 [2024-11-20 12:36:51.346799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.261 [2024-11-20 12:36:51.346821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.261 [2024-11-20 12:36:51.346829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.261 [2024-11-20 12:36:51.352497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.261 [2024-11-20 12:36:51.352519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.261 [2024-11-20 12:36:51.352530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.261 [2024-11-20 12:36:51.358195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.261 [2024-11-20 12:36:51.358217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.261 [2024-11-20 12:36:51.358226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.261 [2024-11-20 12:36:51.363882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.261 [2024-11-20 12:36:51.363904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.261 [2024-11-20 12:36:51.363912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.261 [2024-11-20 12:36:51.369619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.261 [2024-11-20 12:36:51.369641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.261 [2024-11-20 12:36:51.369648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.521 [2024-11-20 12:36:51.375352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.375374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.375382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.381358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.381380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.381388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.388140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.388163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.388171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.395523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.395547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.395555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.401846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.401869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.401878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.408337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.408364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.408372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.414862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.414884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.414892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.421259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.421281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.421290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.428568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.428590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.428599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.435875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.435898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.435907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.443308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.443331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.443339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.451298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.451321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.451331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.459315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.459338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.459347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.465754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.465776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.465785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.471412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.471435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.471443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.476699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.476721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.476729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.482130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.482152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.482160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.487746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.487768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.487776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.493400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.493422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.493429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.498895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.498916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.498924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.504276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.504299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.504307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.509670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.509693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.509701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.515005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.515027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.522 [2024-11-20 12:36:51.515039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.522 [2024-11-20 12:36:51.520481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.522 [2024-11-20 12:36:51.520504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.523 [2024-11-20 12:36:51.520512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.523 [2024-11-20 12:36:51.526056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.523 [2024-11-20 12:36:51.526077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.523 [2024-11-20 12:36:51.526085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.523 [2024-11-20 12:36:51.531478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.523 [2024-11-20 12:36:51.531501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.523 [2024-11-20 12:36:51.531510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.523 [2024-11-20 12:36:51.536904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.523 [2024-11-20 12:36:51.536926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.523 [2024-11-20 12:36:51.536934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:08.523 [2024-11-20 12:36:51.542104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.523 [2024-11-20 12:36:51.542127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.523 [2024-11-20 12:36:51.542136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.523 [2024-11-20 12:36:51.548054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.523 [2024-11-20 12:36:51.548076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.523 [2024-11-20 12:36:51.548084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:08.523 [2024-11-20 12:36:51.554039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.523 [2024-11-20 12:36:51.554062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.523 [2024-11-20 12:36:51.554070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.523 [2024-11-20 12:36:51.559812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580)
00:27:08.523 [2024-11-20 12:36:51.559836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:27:08.523 [2024-11-20 12:36:51.559845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.523 [2024-11-20 12:36:51.565029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.523 [2024-11-20 12:36:51.565055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.523 [2024-11-20 12:36:51.565063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.523 [2024-11-20 12:36:51.570626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.523 [2024-11-20 12:36:51.570647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.523 [2024-11-20 12:36:51.570655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.523 [2024-11-20 12:36:51.576305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.523 [2024-11-20 12:36:51.576327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.523 [2024-11-20 12:36:51.576335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.523 [2024-11-20 12:36:51.581795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.523 [2024-11-20 12:36:51.581817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.523 [2024-11-20 12:36:51.581825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.523 [2024-11-20 12:36:51.587227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.523 [2024-11-20 12:36:51.587249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.523 [2024-11-20 12:36:51.587258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.523 [2024-11-20 12:36:51.592706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.523 [2024-11-20 12:36:51.592727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.523 [2024-11-20 12:36:51.592735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.523 [2024-11-20 12:36:51.598242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.523 [2024-11-20 12:36:51.598264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.523 [2024-11-20 12:36:51.598272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.523 [2024-11-20 12:36:51.603779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.523 [2024-11-20 12:36:51.603802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.523 [2024-11-20 12:36:51.603810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.523 [2024-11-20 12:36:51.609343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.523 [2024-11-20 12:36:51.609365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.523 [2024-11-20 12:36:51.609373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.523 [2024-11-20 12:36:51.614916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.523 [2024-11-20 12:36:51.614937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.523 [2024-11-20 12:36:51.614945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.523 [2024-11-20 12:36:51.620421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.523 [2024-11-20 12:36:51.620443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.523 [2024-11-20 12:36:51.620451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.523 [2024-11-20 12:36:51.625864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 
00:27:08.523 [2024-11-20 12:36:51.625887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.523 [2024-11-20 12:36:51.625895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.523 [2024-11-20 12:36:51.631690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.523 [2024-11-20 12:36:51.631713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.523 [2024-11-20 12:36:51.631721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.637676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.637698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.637706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.643325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.643347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.643355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.648938] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.648965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.648973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.654802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.654824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.654833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.660307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.660330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.660343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.665658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.665680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.665688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.671038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.671059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.671067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.676644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.676667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.676676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.682575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.682597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.682607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.688219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.688243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.688251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.694106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.694130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.694139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.699847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.699871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.699879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.705401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.705423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.705432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.710846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.710872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.710880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.716149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.716171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.716180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.721454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.721476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.721485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.726756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.726778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.726786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.732113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.732135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:08.784 [2024-11-20 12:36:51.732144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.737556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.737578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.737587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.743041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.743063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.743071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.748609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.748631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.748640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.753959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.753980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.753988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.759287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.759310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.759318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.764629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.764651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.764659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.784 [2024-11-20 12:36:51.770047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.784 [2024-11-20 12:36:51.770068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.784 [2024-11-20 12:36:51.770077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.785 [2024-11-20 12:36:51.775740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.785 [2024-11-20 12:36:51.775761] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.785 [2024-11-20 12:36:51.775769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.785 [2024-11-20 12:36:51.781363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.785 [2024-11-20 12:36:51.781385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.785 [2024-11-20 12:36:51.781393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.785 [2024-11-20 12:36:51.786888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.785 [2024-11-20 12:36:51.786911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.785 [2024-11-20 12:36:51.786919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.785 [2024-11-20 12:36:51.792456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.785 [2024-11-20 12:36:51.792477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.785 [2024-11-20 12:36:51.792486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.785 [2024-11-20 12:36:51.797923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 
00:27:08.785 [2024-11-20 12:36:51.797945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.785 [2024-11-20 12:36:51.797958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.785 [2024-11-20 12:36:51.803273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.785 [2024-11-20 12:36:51.803295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.785 [2024-11-20 12:36:51.803307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.785 [2024-11-20 12:36:51.808916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.785 [2024-11-20 12:36:51.808939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.785 [2024-11-20 12:36:51.808952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.785 [2024-11-20 12:36:51.814149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.785 [2024-11-20 12:36:51.814172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.785 [2024-11-20 12:36:51.814180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.785 [2024-11-20 12:36:51.819424] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.785 [2024-11-20 12:36:51.819447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.785 [2024-11-20 12:36:51.819454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.785 [2024-11-20 12:36:51.824797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.785 [2024-11-20 12:36:51.824820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.785 [2024-11-20 12:36:51.824828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.785 [2024-11-20 12:36:51.830054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.785 [2024-11-20 12:36:51.830076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.785 [2024-11-20 12:36:51.830084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.785 [2024-11-20 12:36:51.835403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.785 [2024-11-20 12:36:51.835424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.785 [2024-11-20 12:36:51.835432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:27:08.785 [2024-11-20 12:36:51.840766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.785 [2024-11-20 12:36:51.840787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.785 [2024-11-20 12:36:51.840795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.785 [2024-11-20 12:36:51.845975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.785 [2024-11-20 12:36:51.845996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.785 [2024-11-20 12:36:51.846004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.785 [2024-11-20 12:36:51.851414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.785 [2024-11-20 12:36:51.851436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.785 [2024-11-20 12:36:51.851445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.785 [2024-11-20 12:36:51.856931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.785 [2024-11-20 12:36:51.856960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.785 [2024-11-20 12:36:51.856969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.785 [2024-11-20 12:36:51.862522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.785 [2024-11-20 12:36:51.862544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.785 [2024-11-20 12:36:51.862553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.785 [2024-11-20 12:36:51.868046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.785 [2024-11-20 12:36:51.868067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.785 [2024-11-20 12:36:51.868075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:08.785 [2024-11-20 12:36:51.873570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.785 [2024-11-20 12:36:51.873593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.785 [2024-11-20 12:36:51.873601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:08.785 5391.50 IOPS, 673.94 MiB/s [2024-11-20T11:36:51.901Z] [2024-11-20 12:36:51.880214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc8580) 00:27:08.785 [2024-11-20 12:36:51.880236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:08.785 [2024-11-20 12:36:51.880244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:08.785 00:27:08.785 Latency(us) 00:27:08.785 [2024-11-20T11:36:51.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.785 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:08.785 nvme0n1 : 2.00 5389.88 673.74 0.00 0.00 2965.20 626.87 12537.32 00:27:08.785 [2024-11-20T11:36:51.901Z] =================================================================================================================== 00:27:08.785 [2024-11-20T11:36:51.901Z] Total : 5389.88 673.74 0.00 0.00 2965.20 626.87 12537.32 00:27:08.785 { 00:27:08.785 "results": [ 00:27:08.785 { 00:27:08.785 "job": "nvme0n1", 00:27:08.785 "core_mask": "0x2", 00:27:08.785 "workload": "randread", 00:27:08.785 "status": "finished", 00:27:08.785 "queue_depth": 16, 00:27:08.785 "io_size": 131072, 00:27:08.785 "runtime": 2.003568, 00:27:08.785 "iops": 5389.884446148072, 00:27:08.785 "mibps": 673.735555768509, 00:27:08.785 "io_failed": 0, 00:27:08.785 "io_timeout": 0, 00:27:08.785 "avg_latency_us": 2965.1992781940357, 00:27:08.785 "min_latency_us": 626.8660869565217, 00:27:08.785 "max_latency_us": 12537.321739130435 00:27:08.785 } 00:27:08.785 ], 00:27:08.785 "core_count": 1 00:27:08.785 } 00:27:09.045 12:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:09.045 12:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:09.045 12:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:09.045 | .driver_specific 00:27:09.045 | .nvme_error 00:27:09.045 | .status_code 00:27:09.045 | .command_transient_transport_error' 00:27:09.045 12:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:09.045 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 349 > 0 )) 00:27:09.045 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 590481 00:27:09.045 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 590481 ']' 00:27:09.045 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 590481 00:27:09.045 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:09.045 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:09.045 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 590481 00:27:09.304 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:09.304 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:09.304 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 590481' 00:27:09.304 killing process with pid 590481 00:27:09.304 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 590481 00:27:09.304 Received shutdown signal, test time was about 2.000000 seconds 00:27:09.304 00:27:09.304 Latency(us) 00:27:09.304 [2024-11-20T11:36:52.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.304 [2024-11-20T11:36:52.420Z] =================================================================================================================== 00:27:09.304 
[2024-11-20T11:36:52.421Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:09.305 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 590481 00:27:09.305 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:09.305 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:09.305 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:09.305 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:09.305 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:09.305 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=591062 00:27:09.305 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 591062 /var/tmp/bperf.sock 00:27:09.305 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:09.305 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 591062 ']' 00:27:09.305 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:09.305 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:09.305 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:09.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:09.305 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:09.305 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:09.305 [2024-11-20 12:36:52.374460] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:27:09.305 [2024-11-20 12:36:52.374511] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid591062 ] 00:27:09.565 [2024-11-20 12:36:52.449330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.565 [2024-11-20 12:36:52.491904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.565 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:09.565 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:09.565 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:09.565 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:09.824 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:09.824 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.824 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:09.824 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.824 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:09.824 12:36:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:10.084 nvme0n1 00:27:10.084 12:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:10.084 12:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.084 12:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:10.344 12:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.344 12:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:10.344 12:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:10.344 Running I/O for 2 seconds... 
00:27:10.344 [2024-11-20 12:36:53.305041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e6738 00:27:10.344 [2024-11-20 12:36:53.305674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.344 [2024-11-20 12:36:53.305705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:10.344 [2024-11-20 12:36:53.316114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f92c0 00:27:10.344 [2024-11-20 12:36:53.317381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.344 [2024-11-20 12:36:53.317404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:10.344 [2024-11-20 12:36:53.325082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e95a0 00:27:10.344 [2024-11-20 12:36:53.326334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.344 [2024-11-20 12:36:53.326354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:10.344 [2024-11-20 12:36:53.334808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f2d80 00:27:10.344 [2024-11-20 12:36:53.336183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.344 [2024-11-20 12:36:53.336203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:10.344 [2024-11-20 12:36:53.344520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166eff18 00:27:10.344 [2024-11-20 12:36:53.346011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.344 [2024-11-20 12:36:53.346031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:10.344 [2024-11-20 12:36:53.351027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fb048 00:27:10.344 [2024-11-20 12:36:53.351704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.344 [2024-11-20 12:36:53.351723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:10.344 [2024-11-20 12:36:53.360242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fb480 00:27:10.344 [2024-11-20 12:36:53.361014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.344 [2024-11-20 12:36:53.361035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:10.344 [2024-11-20 12:36:53.369916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e0630 00:27:10.344 [2024-11-20 12:36:53.370811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.344 [2024-11-20 12:36:53.370830] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:10.344 [2024-11-20 12:36:53.380226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166df988 00:27:10.344 [2024-11-20 12:36:53.381164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.344 [2024-11-20 12:36:53.381185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:10.344 [2024-11-20 12:36:53.388879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e23b8 00:27:10.344 [2024-11-20 12:36:53.389894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.344 [2024-11-20 12:36:53.389914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:10.344 [2024-11-20 12:36:53.398567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e5a90 00:27:10.344 [2024-11-20 12:36:53.399722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.344 [2024-11-20 12:36:53.399744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:10.344 [2024-11-20 12:36:53.408252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166de8a8 00:27:10.344 [2024-11-20 12:36:53.409523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.344 [2024-11-20 12:36:53.409543] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:10.344 [2024-11-20 12:36:53.417958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fe2e8 00:27:10.344 [2024-11-20 12:36:53.419334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.344 [2024-11-20 12:36:53.419354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:10.344 [2024-11-20 12:36:53.427622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e0a68 00:27:10.344 [2024-11-20 12:36:53.429104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.344 [2024-11-20 12:36:53.429123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:10.344 [2024-11-20 12:36:53.434116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f2948 00:27:10.344 [2024-11-20 12:36:53.434792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.344 [2024-11-20 12:36:53.434812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:10.344 [2024-11-20 12:36:53.442877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fd208 00:27:10.344 [2024-11-20 12:36:53.443522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13975 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:10.344 [2024-11-20 12:36:53.443549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:10.344 [2024-11-20 12:36:53.452260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f0788 00:27:10.344 [2024-11-20 12:36:53.452906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.344 [2024-11-20 12:36:53.452925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.463569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166edd58 00:27:10.605 [2024-11-20 12:36:53.464618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.605 [2024-11-20 12:36:53.464639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.473273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e73e0 00:27:10.605 [2024-11-20 12:36:53.474416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.605 [2024-11-20 12:36:53.474436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.482051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f6020 00:27:10.605 [2024-11-20 12:36:53.483164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:16075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.605 [2024-11-20 12:36:53.483187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.491704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e88f8 00:27:10.605 [2024-11-20 12:36:53.492965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.605 [2024-11-20 12:36:53.492985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.500256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e95a0 00:27:10.605 [2024-11-20 12:36:53.501070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.605 [2024-11-20 12:36:53.501089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.509664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f4298 00:27:10.605 [2024-11-20 12:36:53.510732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.605 [2024-11-20 12:36:53.510752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.519538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e1f80 00:27:10.605 [2024-11-20 12:36:53.520798] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.605 [2024-11-20 12:36:53.520818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.528901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f0bc0 00:27:10.605 [2024-11-20 12:36:53.530070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.605 [2024-11-20 12:36:53.530091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.536546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e5658 00:27:10.605 [2024-11-20 12:36:53.537346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.605 [2024-11-20 12:36:53.537365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.545717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ecc78 00:27:10.605 [2024-11-20 12:36:53.546511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.605 [2024-11-20 12:36:53.546531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.554903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166df118 
00:27:10.605 [2024-11-20 12:36:53.555476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.605 [2024-11-20 12:36:53.555495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.564351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f7538 00:27:10.605 [2024-11-20 12:36:53.564907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.605 [2024-11-20 12:36:53.564928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.574863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e9168 00:27:10.605 [2024-11-20 12:36:53.575814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.605 [2024-11-20 12:36:53.575834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.584264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ec408 00:27:10.605 [2024-11-20 12:36:53.585342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.605 [2024-11-20 12:36:53.585362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.594904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24ab640) with pdu=0x2000166fe720 00:27:10.605 [2024-11-20 12:36:53.596328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.605 [2024-11-20 12:36:53.596349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.601584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ecc78 00:27:10.605 [2024-11-20 12:36:53.602229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.605 [2024-11-20 12:36:53.602249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.611274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fb048 00:27:10.605 [2024-11-20 12:36:53.612056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.605 [2024-11-20 12:36:53.612076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.620943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f81e0 00:27:10.605 [2024-11-20 12:36:53.621775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.605 [2024-11-20 12:36:53.621795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.631937] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f81e0 00:27:10.605 [2024-11-20 12:36:53.633289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.605 [2024-11-20 12:36:53.633309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:10.605 [2024-11-20 12:36:53.641314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e88f8 00:27:10.606 [2024-11-20 12:36:53.642712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.606 [2024-11-20 12:36:53.642735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:10.606 [2024-11-20 12:36:53.647886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f4f40 00:27:10.606 [2024-11-20 12:36:53.648539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.606 [2024-11-20 12:36:53.648558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:10.606 [2024-11-20 12:36:53.660119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e1710 00:27:10.606 [2024-11-20 12:36:53.661528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.606 [2024-11-20 12:36:53.661549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
00:27:10.606 [2024-11-20 12:36:53.666865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e7818 00:27:10.606 [2024-11-20 12:36:53.667535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.606 [2024-11-20 12:36:53.667554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:10.606 [2024-11-20 12:36:53.676255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fc998 00:27:10.606 [2024-11-20 12:36:53.676915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.606 [2024-11-20 12:36:53.676935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:10.606 [2024-11-20 12:36:53.687258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e1f80 00:27:10.606 [2024-11-20 12:36:53.688398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.606 [2024-11-20 12:36:53.688418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:10.606 [2024-11-20 12:36:53.694637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e0ea0 00:27:10.606 [2024-11-20 12:36:53.695283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.606 [2024-11-20 12:36:53.695303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:10.606 [2024-11-20 12:36:53.704688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ed920 00:27:10.606 [2024-11-20 12:36:53.705142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.606 [2024-11-20 12:36:53.705163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:10.606 [2024-11-20 12:36:53.715213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f2948 00:27:10.606 [2024-11-20 12:36:53.716390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.606 [2024-11-20 12:36:53.716410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.723197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e49b0 00:27:10.866 [2024-11-20 12:36:53.723901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 12:36:53.723926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.733685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f2d80 00:27:10.866 [2024-11-20 12:36:53.734751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 12:36:53.734770] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.741046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e0a68 00:27:10.866 [2024-11-20 12:36:53.741681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 12:36:53.741701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.752446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e0a68 00:27:10.866 [2024-11-20 12:36:53.753508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 12:36:53.753528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.762139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e5658 00:27:10.866 [2024-11-20 12:36:53.763305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 12:36:53.763326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.769511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e0a68 00:27:10.866 [2024-11-20 12:36:53.770196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 
12:36:53.770217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.779554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166de470 00:27:10.866 [2024-11-20 12:36:53.780102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 12:36:53.780121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.788831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e6738 00:27:10.866 [2024-11-20 12:36:53.789743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 12:36:53.789763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.797575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fac10 00:27:10.866 [2024-11-20 12:36:53.798408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 12:36:53.798427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.807270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e0a68 00:27:10.866 [2024-11-20 12:36:53.808273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5849 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 12:36:53.808293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.817853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166dfdc0 00:27:10.866 [2024-11-20 12:36:53.819002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 12:36:53.819021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.826458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166dfdc0 00:27:10.866 [2024-11-20 12:36:53.827533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 12:36:53.827553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.835126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fc128 00:27:10.866 [2024-11-20 12:36:53.835882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 12:36:53.835901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.844536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ec408 00:27:10.866 [2024-11-20 12:36:53.845073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:10 nsid:1 lba:16914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 12:36:53.845093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.855168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f9f68 00:27:10.866 [2024-11-20 12:36:53.856359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 12:36:53.856380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.864375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ebfd0 00:27:10.866 [2024-11-20 12:36:53.865696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 12:36:53.865717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.871736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fb8b8 00:27:10.866 [2024-11-20 12:36:53.872562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 12:36:53.872581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.883933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166eea00 00:27:10.866 [2024-11-20 12:36:53.885505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 12:36:53.885524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.890450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fc560 00:27:10.866 [2024-11-20 12:36:53.891206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 12:36:53.891228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.899242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166df118 00:27:10.866 [2024-11-20 12:36:53.899897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.866 [2024-11-20 12:36:53.899917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:10.866 [2024-11-20 12:36:53.909977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ea680 00:27:10.866 [2024-11-20 12:36:53.910856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.867 [2024-11-20 12:36:53.910875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:10.867 [2024-11-20 12:36:53.918855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e1710 
00:27:10.867 [2024-11-20 12:36:53.919727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.867 [2024-11-20 12:36:53.919747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:10.867 [2024-11-20 12:36:53.929869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e1710 00:27:10.867 [2024-11-20 12:36:53.931216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.867 [2024-11-20 12:36:53.931236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:10.867 [2024-11-20 12:36:53.937865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166de470 00:27:10.867 [2024-11-20 12:36:53.938728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.867 [2024-11-20 12:36:53.938749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.867 [2024-11-20 12:36:53.947048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166df118 00:27:10.867 [2024-11-20 12:36:53.947897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.867 [2024-11-20 12:36:53.947917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:10.867 [2024-11-20 12:36:53.956345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x24ab640) with pdu=0x2000166f3a28 00:27:10.867 [2024-11-20 12:36:53.957208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.867 [2024-11-20 12:36:53.957226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:10.867 [2024-11-20 12:36:53.965453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f3a28 00:27:10.867 [2024-11-20 12:36:53.966403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.867 [2024-11-20 12:36:53.966425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:10.867 [2024-11-20 12:36:53.974653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f3a28 00:27:10.867 [2024-11-20 12:36:53.975616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.867 [2024-11-20 12:36:53.975635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:53.984089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f3a28 00:27:11.127 [2024-11-20 12:36:53.985047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:53.985066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:53.993377] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f3a28 00:27:11.127 [2024-11-20 12:36:53.994354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:53.994373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:54.002570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f3a28 00:27:11.127 [2024-11-20 12:36:54.003533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:54.003551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:54.011752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f3a28 00:27:11.127 [2024-11-20 12:36:54.012688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:54.012706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:54.020933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f3a28 00:27:11.127 [2024-11-20 12:36:54.021888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:54.021907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 
dnr:0 00:27:11.127 [2024-11-20 12:36:54.030116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f3a28 00:27:11.127 [2024-11-20 12:36:54.031055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:54.031073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:54.039310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f3a28 00:27:11.127 [2024-11-20 12:36:54.040260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:54.040278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:54.048818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fdeb0 00:27:11.127 [2024-11-20 12:36:54.049860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:54.049879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:54.059353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e84c0 00:27:11.127 [2024-11-20 12:36:54.060876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:54.060895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:54.066058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f35f0 00:27:11.127 [2024-11-20 12:36:54.066768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:54.066788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:54.075502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f46d0 00:27:11.127 [2024-11-20 12:36:54.076224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:54.076244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:54.085058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f9b30 00:27:11.127 [2024-11-20 12:36:54.085857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:54.085877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:54.093798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f7538 00:27:11.127 [2024-11-20 12:36:54.094493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:54.094513] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:54.104866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e7818 00:27:11.127 [2024-11-20 12:36:54.106024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:54.106043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:54.113607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ed0b0 00:27:11.127 [2024-11-20 12:36:54.114754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:54.114772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:54.122207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f2510 00:27:11.127 [2024-11-20 12:36:54.123023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:54.123042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:54.131343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f7970 00:27:11.127 [2024-11-20 12:36:54.132171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:54.132190] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:54.140584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ea680 00:27:11.127 [2024-11-20 12:36:54.141421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:54.141440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:54.151027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e7818 00:27:11.127 [2024-11-20 12:36:54.152291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:54.152311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:54.159603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fe2e8 00:27:11.127 [2024-11-20 12:36:54.160536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.127 [2024-11-20 12:36:54.160556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:54.168944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e1f80 00:27:11.127 [2024-11-20 12:36:54.169766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:11.127 [2024-11-20 12:36:54.169786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:11.127 [2024-11-20 12:36:54.177659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e49b0 00:27:11.128 [2024-11-20 12:36:54.178925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.128 [2024-11-20 12:36:54.178943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.128 [2024-11-20 12:36:54.185567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166dfdc0 00:27:11.128 [2024-11-20 12:36:54.186262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.128 [2024-11-20 12:36:54.186282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:11.128 [2024-11-20 12:36:54.195286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e5a90 00:27:11.128 [2024-11-20 12:36:54.196071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.128 [2024-11-20 12:36:54.196090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:11.128 [2024-11-20 12:36:54.204924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f1868 00:27:11.128 [2024-11-20 12:36:54.205847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1110 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.128 [2024-11-20 12:36:54.205869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:11.128 [2024-11-20 12:36:54.214598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e6fa8 00:27:11.128 [2024-11-20 12:36:54.215669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.128 [2024-11-20 12:36:54.215689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:11.128 [2024-11-20 12:36:54.224061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e1710 00:27:11.128 [2024-11-20 12:36:54.224672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.128 [2024-11-20 12:36:54.224692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:11.128 [2024-11-20 12:36:54.232952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f5378 00:27:11.128 [2024-11-20 12:36:54.233907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.128 [2024-11-20 12:36:54.233926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.387 [2024-11-20 12:36:54.242374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ec408 00:27:11.387 [2024-11-20 12:36:54.243213] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.243234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.251906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fcdd0 00:27:11.388 [2024-11-20 12:36:54.252741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.252760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.262328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f4f40 00:27:11.388 [2024-11-20 12:36:54.263603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.263623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.270914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f96f8 00:27:11.388 [2024-11-20 12:36:54.271848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.271867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.280019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166efae0 00:27:11.388 [2024-11-20 12:36:54.280959] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.280979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.289505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e2c28 00:27:11.388 [2024-11-20 12:36:54.290544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.290564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.388 27393.00 IOPS, 107.00 MiB/s [2024-11-20T11:36:54.504Z] [2024-11-20 12:36:54.299001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e6b70 00:27:11.388 [2024-11-20 12:36:54.300054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.300074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.308543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166df988 00:27:11.388 [2024-11-20 12:36:54.309729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.309749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.317275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x24ab640) with pdu=0x2000166fc998 00:27:11.388 [2024-11-20 12:36:54.318287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.318306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.326623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f8e88 00:27:11.388 [2024-11-20 12:36:54.327558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.327578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.335997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ee190 00:27:11.388 [2024-11-20 12:36:54.336930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.337061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.345246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ed4e8 00:27:11.388 [2024-11-20 12:36:54.346181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.346200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.354490] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e27f0 00:27:11.388 [2024-11-20 12:36:54.355439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.355457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.364045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f7da8 00:27:11.388 [2024-11-20 12:36:54.365100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.365120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.372677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e01f8 00:27:11.388 [2024-11-20 12:36:54.373589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.373609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.382173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e5220 00:27:11.388 [2024-11-20 12:36:54.383090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.383110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 
dnr:0 00:27:11.388 [2024-11-20 12:36:54.391663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166df118 00:27:11.388 [2024-11-20 12:36:54.392694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.392713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.400458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166eee38 00:27:11.388 [2024-11-20 12:36:54.401477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.401496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.410117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f8e88 00:27:11.388 [2024-11-20 12:36:54.411253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.411272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.419775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fb8b8 00:27:11.388 [2024-11-20 12:36:54.421031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.421051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.429429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e12d8 00:27:11.388 [2024-11-20 12:36:54.430832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.430851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.436141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e9e10 00:27:11.388 [2024-11-20 12:36:54.436795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.436814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.445796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f8e88 00:27:11.388 [2024-11-20 12:36:54.446613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.446637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.455230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f3e60 00:27:11.388 [2024-11-20 12:36:54.456012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.456032] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.466109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e6738 00:27:11.388 [2024-11-20 12:36:54.467266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.467286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.474883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166de470 00:27:11.388 [2024-11-20 12:36:54.476023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.476043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.484541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fbcf0 00:27:11.388 [2024-11-20 12:36:54.485797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.388 [2024-11-20 12:36:54.485817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:11.388 [2024-11-20 12:36:54.494229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fb8b8 00:27:11.389 [2024-11-20 12:36:54.495633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.389 [2024-11-20 
12:36:54.495653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.649 [2024-11-20 12:36:54.503813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166edd58 00:27:11.649 [2024-11-20 12:36:54.505252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.649 [2024-11-20 12:36:54.505272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.649 [2024-11-20 12:36:54.510280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e6b70 00:27:11.649 [2024-11-20 12:36:54.510961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.649 [2024-11-20 12:36:54.510980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:11.649 [2024-11-20 12:36:54.519940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e7818 00:27:11.649 [2024-11-20 12:36:54.520738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.649 [2024-11-20 12:36:54.520756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:11.649 [2024-11-20 12:36:54.529604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e99d8 00:27:11.649 [2024-11-20 12:36:54.530517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16247 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:11.649 [2024-11-20 12:36:54.530536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:11.649 [2024-11-20 12:36:54.539267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e7c50 00:27:11.649 [2024-11-20 12:36:54.540295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.649 [2024-11-20 12:36:54.540314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:11.649 [2024-11-20 12:36:54.548925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f20d8 00:27:11.649 [2024-11-20 12:36:54.550096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.649 [2024-11-20 12:36:54.550115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:11.649 [2024-11-20 12:36:54.558644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e7818 00:27:11.649 [2024-11-20 12:36:54.559910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.649 [2024-11-20 12:36:54.559929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:11.649 [2024-11-20 12:36:54.567389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166feb58 00:27:11.649 [2024-11-20 12:36:54.568351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:19 nsid:1 lba:17580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.649 [2024-11-20 12:36:54.568371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.649 [2024-11-20 12:36:54.575935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fb8b8 00:27:11.649 [2024-11-20 12:36:54.576843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.649 [2024-11-20 12:36:54.576861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:11.649 [2024-11-20 12:36:54.585612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fb480 00:27:11.649 [2024-11-20 12:36:54.586627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.649 [2024-11-20 12:36:54.586649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:11.649 [2024-11-20 12:36:54.595280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166df550 00:27:11.649 [2024-11-20 12:36:54.596409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.649 [2024-11-20 12:36:54.596428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:11.649 [2024-11-20 12:36:54.604956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f5378 00:27:11.649 [2024-11-20 12:36:54.606212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.649 [2024-11-20 12:36:54.606231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:11.649 [2024-11-20 12:36:54.614624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f2948 00:27:11.649 [2024-11-20 12:36:54.615991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.649 [2024-11-20 12:36:54.616010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:11.649 [2024-11-20 12:36:54.623978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e8d30 00:27:11.649 [2024-11-20 12:36:54.625368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.649 [2024-11-20 12:36:54.625387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.649 [2024-11-20 12:36:54.631968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166de8a8 00:27:11.649 [2024-11-20 12:36:54.632903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.649 [2024-11-20 12:36:54.632922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:11.650 [2024-11-20 12:36:54.640440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e49b0 
00:27:11.650 [2024-11-20 12:36:54.641359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.650 [2024-11-20 12:36:54.641378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:11.650 [2024-11-20 12:36:54.650138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e49b0 00:27:11.650 [2024-11-20 12:36:54.651157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.650 [2024-11-20 12:36:54.651177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:11.650 [2024-11-20 12:36:54.659818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f7100 00:27:11.650 [2024-11-20 12:36:54.660994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.650 [2024-11-20 12:36:54.661015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:11.650 [2024-11-20 12:36:54.669367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f8e88 00:27:11.650 [2024-11-20 12:36:54.670192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.650 [2024-11-20 12:36:54.670211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:11.650 [2024-11-20 12:36:54.678025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24ab640) with pdu=0x2000166f2948 00:27:11.650 [2024-11-20 12:36:54.679313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.650 [2024-11-20 12:36:54.679333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.650 [2024-11-20 12:36:54.687301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166edd58 00:27:11.650 [2024-11-20 12:36:54.688351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.650 [2024-11-20 12:36:54.688373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:11.650 [2024-11-20 12:36:54.696685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166df118 00:27:11.650 [2024-11-20 12:36:54.697732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.650 [2024-11-20 12:36:54.697751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:11.650 [2024-11-20 12:36:54.705815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e9e10 00:27:11.650 [2024-11-20 12:36:54.706544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.650 [2024-11-20 12:36:54.706564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:11.650 [2024-11-20 12:36:54.714290] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f57b0 00:27:11.650 [2024-11-20 12:36:54.715089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.650 [2024-11-20 12:36:54.715109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:11.650 [2024-11-20 12:36:54.723965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e6738 00:27:11.650 [2024-11-20 12:36:54.724880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.650 [2024-11-20 12:36:54.724899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:11.650 [2024-11-20 12:36:54.733618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e6300 00:27:11.650 [2024-11-20 12:36:54.734664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.650 [2024-11-20 12:36:54.734683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:11.650 [2024-11-20 12:36:54.743084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f8e88 00:27:11.650 [2024-11-20 12:36:54.743670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.650 [2024-11-20 12:36:54.743690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:27:11.650 [2024-11-20 12:36:54.753888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e3d08 00:27:11.650 [2024-11-20 12:36:54.755232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.650 [2024-11-20 12:36:54.755251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.650 [2024-11-20 12:36:54.760579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e1b48 00:27:11.650 [2024-11-20 12:36:54.761213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.650 [2024-11-20 12:36:54.761233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:11.910 [2024-11-20 12:36:54.770564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e6300 00:27:11.910 [2024-11-20 12:36:54.771180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.910 [2024-11-20 12:36:54.771200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:11.910 [2024-11-20 12:36:54.782546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fe2e8 00:27:11.910 [2024-11-20 12:36:54.784004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.910 [2024-11-20 12:36:54.784024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:11.910 [2024-11-20 12:36:54.789139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166eaef0 00:27:11.910 [2024-11-20 12:36:54.789870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.910 [2024-11-20 12:36:54.789890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:11.910 [2024-11-20 12:36:54.800161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166eaef0 00:27:11.910 [2024-11-20 12:36:54.801368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.910 [2024-11-20 12:36:54.801387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:11.910 [2024-11-20 12:36:54.809315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f3a28 00:27:11.910 [2024-11-20 12:36:54.810524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.910 [2024-11-20 12:36:54.810544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:11.910 [2024-11-20 12:36:54.818765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ff3c8 00:27:11.910 [2024-11-20 12:36:54.819971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.910 [2024-11-20 12:36:54.819989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:11.910 [2024-11-20 12:36:54.826758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ed0b0 00:27:11.910 [2024-11-20 12:36:54.827480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.910 [2024-11-20 12:36:54.827499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:11.910 [2024-11-20 12:36:54.836372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f7da8 00:27:11.910 [2024-11-20 12:36:54.837323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.910 [2024-11-20 12:36:54.837343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:11.910 [2024-11-20 12:36:54.846912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ef270 00:27:11.910 [2024-11-20 12:36:54.848281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.910 [2024-11-20 12:36:54.848300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:11.910 [2024-11-20 12:36:54.855449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e6300 00:27:11.910 [2024-11-20 12:36:54.856391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.910 [2024-11-20 12:36:54.856411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:11.910 [2024-11-20 12:36:54.865797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e4140 00:27:11.910 [2024-11-20 12:36:54.867305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.910 [2024-11-20 12:36:54.867325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:11.910 [2024-11-20 12:36:54.872287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166df550 00:27:11.910 [2024-11-20 12:36:54.872962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.910 [2024-11-20 12:36:54.872981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:11.910 [2024-11-20 12:36:54.881044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166efae0 00:27:11.910 [2024-11-20 12:36:54.881703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.911 [2024-11-20 12:36:54.881722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:11.911 [2024-11-20 12:36:54.890420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f35f0 00:27:11.911 [2024-11-20 12:36:54.891114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:11.911 [2024-11-20 12:36:54.891134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:11.911 [2024-11-20 12:36:54.899455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f0ff8 00:27:11.911 [2024-11-20 12:36:54.900116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.911 [2024-11-20 12:36:54.900136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:11.911 [2024-11-20 12:36:54.910910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166feb58 00:27:11.911 [2024-11-20 12:36:54.912178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.911 [2024-11-20 12:36:54.912196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:11.911 [2024-11-20 12:36:54.920291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f7538 00:27:11.911 [2024-11-20 12:36:54.921546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.911 [2024-11-20 12:36:54.921566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:11.911 [2024-11-20 12:36:54.928350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e49b0 00:27:11.911 [2024-11-20 12:36:54.929600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24041 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.911 [2024-11-20 12:36:54.929623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:11.911 [2024-11-20 12:36:54.938620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166df550
00:27:11.911 [2024-11-20 12:36:54.939665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.911 [2024-11-20 12:36:54.939684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:11.911 [2024-11-20 12:36:54.946877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e7c50
00:27:11.911 [2024-11-20 12:36:54.947956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.911 [2024-11-20 12:36:54.947975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:11.911 [2024-11-20 12:36:54.956136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ef6a8
00:27:11.911 [2024-11-20 12:36:54.956968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.911 [2024-11-20 12:36:54.956988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:11.911 [2024-11-20 12:36:54.966665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f20d8
00:27:11.911 [2024-11-20 12:36:54.967945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.911 [2024-11-20 12:36:54.967969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:27:11.911 [2024-11-20 12:36:54.974667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166df550
00:27:11.911 [2024-11-20 12:36:54.975486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.911 [2024-11-20 12:36:54.975506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:11.911 [2024-11-20 12:36:54.984162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166eaab8
00:27:11.911 [2024-11-20 12:36:54.984827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.911 [2024-11-20 12:36:54.984848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:11.911 [2024-11-20 12:36:54.992988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f0bc0
00:27:11.911 [2024-11-20 12:36:54.993579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.911 [2024-11-20 12:36:54.993600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:27:11.911 [2024-11-20 12:36:55.001761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fd208
00:27:11.911 [2024-11-20 12:36:55.002201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.911 [2024-11-20 12:36:55.002221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:11.911 [2024-11-20 12:36:55.010944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e2c28
00:27:11.911 [2024-11-20 12:36:55.011632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.911 [2024-11-20 12:36:55.011653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:11.911 [2024-11-20 12:36:55.020829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e8d30
00:27:11.911 [2024-11-20 12:36:55.021717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.911 [2024-11-20 12:36:55.021738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.030974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f7970
00:27:12.171 [2024-11-20 12:36:55.032058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.032078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.039585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ecc78
00:27:12.171 [2024-11-20 12:36:55.040388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.040408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.048319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e99d8
00:27:12.171 [2024-11-20 12:36:55.048983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.049004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.057852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e49b0
00:27:12.171 [2024-11-20 12:36:55.058422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.058443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.068300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f7970
00:27:12.171 [2024-11-20 12:36:55.069444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.069464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.078319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fb480
00:27:12.171 [2024-11-20 12:36:55.079528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.079548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.087714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e0a68
00:27:12.171 [2024-11-20 12:36:55.089014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.089034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.096771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fe2e8
00:27:12.171 [2024-11-20 12:36:55.098015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.098034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.106484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166dfdc0
00:27:12.171 [2024-11-20 12:36:55.107803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.107823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.116165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fd640
00:27:12.171 [2024-11-20 12:36:55.117593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.117612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.124051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166de8a8
00:27:12.171 [2024-11-20 12:36:55.124783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.124802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.132426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f6cc8
00:27:12.171 [2024-11-20 12:36:55.133343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.133362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.141555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fc128
00:27:12.171 [2024-11-20 12:36:55.142267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.142286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.150318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ea248
00:27:12.171 [2024-11-20 12:36:55.150886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.150905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.159639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e95a0
00:27:12.171 [2024-11-20 12:36:55.160214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.160235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.169131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e4578
00:27:12.171 [2024-11-20 12:36:55.169576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.169601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.180683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e1f80
00:27:12.171 [2024-11-20 12:36:55.182103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.182124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.189859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ebb98
00:27:12.171 [2024-11-20 12:36:55.191276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.191295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.196473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f6890
00:27:12.171 [2024-11-20 12:36:55.197161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.197180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.206464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f6890
00:27:12.171 [2024-11-20 12:36:55.207160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.207179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.215931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e4de8
00:27:12.171 [2024-11-20 12:36:55.216494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.216514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.224747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166fd208
00:27:12.171 [2024-11-20 12:36:55.225238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.225258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.234351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f0ff8
00:27:12.171 [2024-11-20 12:36:55.234900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.171 [2024-11-20 12:36:55.234920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:12.171 [2024-11-20 12:36:55.244057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e5220
00:27:12.171 [2024-11-20 12:36:55.244739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.172 [2024-11-20 12:36:55.244759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:12.172 [2024-11-20 12:36:55.253195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e3d08
00:27:12.172 [2024-11-20 12:36:55.254041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.172 [2024-11-20 12:36:55.254061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:27:12.172 [2024-11-20 12:36:55.261935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166f0ff8
00:27:12.172 [2024-11-20 12:36:55.262626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.172 [2024-11-20 12:36:55.262645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:12.172 [2024-11-20 12:36:55.271231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e3060
00:27:12.172 [2024-11-20 12:36:55.271908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.172 [2024-11-20 12:36:55.271928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:12.172 [2024-11-20 12:36:55.282792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e23b8
00:27:12.172 [2024-11-20 12:36:55.284361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.172 [2024-11-20 12:36:55.284381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:12.431 [2024-11-20 12:36:55.289704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166e5220
00:27:12.431 [2024-11-20 12:36:55.290483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.431 [2024-11-20 12:36:55.290502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:27:12.431 27464.50 IOPS, 107.28 MiB/s [2024-11-20T11:36:55.547Z] [2024-11-20 12:36:55.302110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab640) with pdu=0x2000166ef6a8
00:27:12.431 [2024-11-20 12:36:55.303168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.431 [2024-11-20 12:36:55.303187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:12.431
00:27:12.431 Latency(us)
00:27:12.431 [2024-11-20T11:36:55.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:12.431 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:12.431 nvme0n1 : 2.01 27475.85 107.33 0.00 0.00 4652.63 1802.24 12708.29
00:27:12.431 [2024-11-20T11:36:55.547Z] ===================================================================================================================
00:27:12.431 [2024-11-20T11:36:55.547Z] Total : 27475.85 107.33 0.00 0.00 4652.63 1802.24 12708.29
00:27:12.431 {
00:27:12.431 "results": [
00:27:12.431 {
00:27:12.431 "job": "nvme0n1",
00:27:12.431 "core_mask": "0x2",
00:27:12.431 "workload": "randwrite",
00:27:12.431 "status": "finished",
00:27:12.431 "queue_depth": 128,
00:27:12.431 "io_size": 4096,
00:27:12.431 "runtime": 2.006744,
00:27:12.431 "iops": 27475.851428981474,
00:27:12.431 "mibps": 107.32754464445888,
00:27:12.431 "io_failed": 0,
00:27:12.431 "io_timeout": 0,
00:27:12.431 "avg_latency_us": 4652.631539556409,
00:27:12.431 "min_latency_us": 1802.24,
00:27:12.431 "max_latency_us": 12708.285217391305
00:27:12.431 }
00:27:12.431 ],
00:27:12.431 "core_count": 1
00:27:12.431 }
00:27:12.431 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:12.431 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:12.431 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:12.431 | .driver_specific
00:27:12.431 | .nvme_error
00:27:12.431 | .status_code
00:27:12.431 | .command_transient_transport_error'
00:27:12.431 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:12.431 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 216 > 0 ))
00:27:12.431 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 591062
00:27:12.431 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 591062 ']'
00:27:12.431 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 591062
00:27:12.431 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:12.431 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:12.431 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 591062
00:27:12.691 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:12.691 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:12.691 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 591062'
killing process with pid 591062
12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 591062
Received shutdown signal, test time was about 2.000000 seconds
00:27:12.691
00:27:12.691 Latency(us)
00:27:12.691 [2024-11-20T11:36:55.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:12.691 [2024-11-20T11:36:55.807Z] ===================================================================================================================
00:27:12.691 [2024-11-20T11:36:55.807Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:12.691 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 591062
00:27:12.691 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:27:12.691 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:12.691 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:12.691 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:12.691 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:12.691 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=591637
00:27:12.691 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 591637 /var/tmp/bperf.sock
00:27:12.691 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:27:12.691 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 591637 ']'
00:27:12.691 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:12.691 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:12.691 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:12.691 [2024-11-20 12:36:55.777092] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization...
00:27:12.691 [2024-11-20 12:36:55.777141] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid591637 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
00:27:12.951 [2024-11-20 12:36:55.852721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:12.951 [2024-11-20 12:36:55.894913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:12.951 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:12.951 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:27:12.951 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:12.951 12:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:13.210 12:36:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:13.210 12:36:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.210 12:36:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:13.210 12:36:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.210 12:36:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:13.210 12:36:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:13.470 nvme0n1
00:27:13.470 12:36:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:13.470 12:36:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:13.470 12:36:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:13.470 12:36:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:13.470 12:36:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:13.470 12:36:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
00:27:13.470 [2024-11-20 12:36:56.562459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.470 [2024-11-20 12:36:56.562640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.470 [2024-11-20 12:36:56.562669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:13.470 [2024-11-20 12:36:56.568724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.470 [2024-11-20 12:36:56.568874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.470 [2024-11-20 12:36:56.568896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:13.470 [2024-11-20 12:36:56.574539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.470 [2024-11-20 12:36:56.574630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.470 [2024-11-20 12:36:56.574652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:13.470 [2024-11-20 12:36:56.580264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.470 [2024-11-20 12:36:56.580328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.470 [2024-11-20 12:36:56.580348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:13.730 [2024-11-20 12:36:56.585810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.730 [2024-11-20 12:36:56.585904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.730 [2024-11-20 12:36:56.585923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:13.730 [2024-11-20 12:36:56.591286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.730 [2024-11-20 12:36:56.591390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.730 [2024-11-20 12:36:56.591411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:13.730 [2024-11-20 12:36:56.596573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.730 [2024-11-20 12:36:56.596688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.730 [2024-11-20 12:36:56.596707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:13.730 [2024-11-20 12:36:56.601866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.730 [2024-11-20 12:36:56.602022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.730 [2024-11-20 12:36:56.602041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:13.730 [2024-11-20 12:36:56.608177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.730 [2024-11-20 12:36:56.608340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.730 [2024-11-20 12:36:56.608359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:13.730 [2024-11-20 12:36:56.613699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.730 [2024-11-20 12:36:56.613789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.730 [2024-11-20 12:36:56.613809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:13.730 [2024-11-20 12:36:56.619784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.730 [2024-11-20 12:36:56.619935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.730 [2024-11-20 12:36:56.619965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:13.730 [2024-11-20 12:36:56.625743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.730 [2024-11-20 12:36:56.625899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.730 [2024-11-20 12:36:56.625918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:13.730 [2024-11-20 12:36:56.630520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.730 [2024-11-20 12:36:56.630613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.730 [2024-11-20 12:36:56.630632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:13.730 [2024-11-20 12:36:56.635728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.730 [2024-11-20 12:36:56.635834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.730 [2024-11-20 12:36:56.635853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:13.730 [2024-11-20 12:36:56.640858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.730 [2024-11-20 12:36:56.641025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.730 [2024-11-20 12:36:56.641044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:13.730 [2024-11-20 12:36:56.646011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.730 [2024-11-20 12:36:56.646142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.730 [2024-11-20 12:36:56.646161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:13.730 [2024-11-20 12:36:56.651022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.730 [2024-11-20 12:36:56.651159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.731 [2024-11-20 12:36:56.651177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:13.731 [2024-11-20 12:36:56.656127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.731 [2024-11-20 12:36:56.656217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.731 [2024-11-20 12:36:56.656236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:13.731 [2024-11-20 12:36:56.660871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.731 [2024-11-20 12:36:56.660972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.731 [2024-11-20 12:36:56.660993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:13.731 [2024-11-20 12:36:56.665817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.731 [2024-11-20 12:36:56.665891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.731 [2024-11-20 12:36:56.665911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:13.731 [2024-11-20 12:36:56.670300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.731 [2024-11-20 12:36:56.670365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.731 [2024-11-20 12:36:56.670384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:13.731 [2024-11-20 12:36:56.674688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.731 [2024-11-20 12:36:56.674755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.674774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.679448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.679533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.679553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.685151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.685310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.685330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.691626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.691697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.691716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.697913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.698009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.698029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.703923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.704010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.704030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.710470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.710525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.710545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.717524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.717590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.717609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.724101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 
00:27:13.731 [2024-11-20 12:36:56.724172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.724192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.730030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.730112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.730132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.736017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.736148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.736169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.741822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.741921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.741940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.747393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.747478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.747498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.753545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.753654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.753673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.760488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.760647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.760668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.768190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.768322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.768350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.775596] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.775752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.775772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.782236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.782295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.782314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.787332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.787556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.787576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.792401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.792655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.792675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:27:13.731 [2024-11-20 12:36:56.797002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.797266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.797286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.801501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.801749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.801770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.731 [2024-11-20 12:36:56.805896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.731 [2024-11-20 12:36:56.806171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.731 [2024-11-20 12:36:56.806192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 12:36:56.812196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.732 [2024-11-20 12:36:56.812531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 12:36:56.812551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 12:36:56.818660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.732 [2024-11-20 12:36:56.818902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 12:36:56.818923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 12:36:56.824657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.732 [2024-11-20 12:36:56.824957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 12:36:56.824978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 12:36:56.830093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.732 [2024-11-20 12:36:56.830351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 12:36:56.830372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 12:36:56.834983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.732 [2024-11-20 12:36:56.835231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 12:36:56.835251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 12:36:56.840012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.732 [2024-11-20 12:36:56.840265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.732 [2024-11-20 12:36:56.840285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.732 [2024-11-20 12:36:56.844862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.993 [2024-11-20 12:36:56.845101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.993 [2024-11-20 12:36:56.845122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.993 [2024-11-20 12:36:56.849744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.993 [2024-11-20 12:36:56.850003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.993 [2024-11-20 12:36:56.850024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.993 [2024-11-20 12:36:56.854228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.993 [2024-11-20 12:36:56.854471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:13.993 [2024-11-20 12:36:56.854491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.858730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.994 [2024-11-20 12:36:56.858981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.994 [2024-11-20 12:36:56.859000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.863752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.994 [2024-11-20 12:36:56.864017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.994 [2024-11-20 12:36:56.864037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.868560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.994 [2024-11-20 12:36:56.868815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.994 [2024-11-20 12:36:56.868834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.872880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.994 [2024-11-20 12:36:56.873129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.994 [2024-11-20 12:36:56.873149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.877627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.994 [2024-11-20 12:36:56.877872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.994 [2024-11-20 12:36:56.877893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.882829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.994 [2024-11-20 12:36:56.883055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.994 [2024-11-20 12:36:56.883075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.887841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.994 [2024-11-20 12:36:56.888079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.994 [2024-11-20 12:36:56.888100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.892875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.994 [2024-11-20 12:36:56.893127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.994 [2024-11-20 12:36:56.893147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.897640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.994 [2024-11-20 12:36:56.897875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.994 [2024-11-20 12:36:56.897895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.902049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.994 [2024-11-20 12:36:56.902290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.994 [2024-11-20 12:36:56.902314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.906399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.994 [2024-11-20 12:36:56.906640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.994 [2024-11-20 12:36:56.906659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.910567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 
00:27:13.994 [2024-11-20 12:36:56.910816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.994 [2024-11-20 12:36:56.910836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.914855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.994 [2024-11-20 12:36:56.915115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.994 [2024-11-20 12:36:56.915134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.919283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.994 [2024-11-20 12:36:56.919525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.994 [2024-11-20 12:36:56.919545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.923747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.994 [2024-11-20 12:36:56.924017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.994 [2024-11-20 12:36:56.924037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.928278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.994 [2024-11-20 12:36:56.928524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.994 [2024-11-20 12:36:56.928544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.932704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.994 [2024-11-20 12:36:56.932945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.994 [2024-11-20 12:36:56.932974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.937216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.994 [2024-11-20 12:36:56.937447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.994 [2024-11-20 12:36:56.937467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.941667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:13.994 [2024-11-20 12:36:56.941907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.994 [2024-11-20 12:36:56.941927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.994 [2024-11-20 12:36:56.945839] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.994 [2024-11-20 12:36:56.946094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.994 [2024-11-20 12:36:56.946113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:13.994 [2024-11-20 12:36:56.950195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:13.994 [2024-11-20 12:36:56.950446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.994 [2024-11-20 12:36:56.950466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... this three-record pattern (tcp.c:2233:data_crc32_calc_done *ERROR* on tqpair=(0x24ab980) pdu=0x2000166ff3c8, nvme_io_qpair_print_command *NOTICE* WRITE qid:1 cid:0 nsid:1 len:32, spdk_nvme_print_completion *NOTICE* COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for each subsequent 32-block WRITE, with only the timestamps, lba, and sqhd (cycling 0002/0022/0042/0062) fields varying, from 12:36:56.954 through 12:36:57.339 ...]
00:27:14.258 [2024-11-20 12:36:57.339993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8
00:27:14.258 [2024-11-20 12:36:57.340237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:14.258 [2024-11-20 12:36:57.340257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.258 [2024-11-20 12:36:57.344462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.258 [2024-11-20 12:36:57.344702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.258 [2024-11-20 12:36:57.344722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.258 [2024-11-20 12:36:57.349123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.258 [2024-11-20 12:36:57.349370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.258 [2024-11-20 12:36:57.349389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.258 [2024-11-20 12:36:57.353630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.258 [2024-11-20 12:36:57.353877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.258 [2024-11-20 12:36:57.353898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.258 [2024-11-20 12:36:57.357846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.258 [2024-11-20 12:36:57.358129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:14.258 [2024-11-20 12:36:57.358151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.258 [2024-11-20 12:36:57.362294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.258 [2024-11-20 12:36:57.362539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.258 [2024-11-20 12:36:57.362560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.259 [2024-11-20 12:36:57.366870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.259 [2024-11-20 12:36:57.367136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.259 [2024-11-20 12:36:57.367156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.519 [2024-11-20 12:36:57.372269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.519 [2024-11-20 12:36:57.372523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.519 [2024-11-20 12:36:57.372543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.519 [2024-11-20 12:36:57.377221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.519 [2024-11-20 12:36:57.377454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.519 [2024-11-20 12:36:57.377475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.519 [2024-11-20 12:36:57.381701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.519 [2024-11-20 12:36:57.381961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.519 [2024-11-20 12:36:57.381982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.519 [2024-11-20 12:36:57.386149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.519 [2024-11-20 12:36:57.386398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.519 [2024-11-20 12:36:57.386418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.519 [2024-11-20 12:36:57.390604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.519 [2024-11-20 12:36:57.390853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.519 [2024-11-20 12:36:57.390874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.519 [2024-11-20 12:36:57.395042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.519 [2024-11-20 12:36:57.395279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.519 [2024-11-20 12:36:57.395300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.519 [2024-11-20 12:36:57.399460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.519 [2024-11-20 12:36:57.399704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.519 [2024-11-20 12:36:57.399729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.519 [2024-11-20 12:36:57.403969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.519 [2024-11-20 12:36:57.404211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.519 [2024-11-20 12:36:57.404231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.519 [2024-11-20 12:36:57.408374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.408616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.408637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.412870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 
00:27:14.520 [2024-11-20 12:36:57.413130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.413151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.417250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.417511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.417532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.421587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.421833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.421853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.425981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.426218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.426238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.430183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.430430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.430451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.434693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.434954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.434974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.439340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.439583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.439603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.444372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.444592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.444612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.449365] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.449615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.449635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.453871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.454123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.454144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.458356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.458614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.458634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.462762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.463025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.463046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:27:14.520 [2024-11-20 12:36:57.467220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.467455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.467476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.471538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.471783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.471803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.475666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.475915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.475935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.480105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.480353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.480374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.484648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.484891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.484911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.489945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.490189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.490209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.494611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.494856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.494876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.499072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.499329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.499349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.503492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.503735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.503755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.507902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.508148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.508168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.512380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.512637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.512657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.516801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.517062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:14.520 [2024-11-20 12:36:57.517085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.521202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.521454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.521475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.525642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.520 [2024-11-20 12:36:57.525894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.520 [2024-11-20 12:36:57.525914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.520 [2024-11-20 12:36:57.530011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.521 [2024-11-20 12:36:57.530251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.521 [2024-11-20 12:36:57.530271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.521 [2024-11-20 12:36:57.534511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.521 [2024-11-20 12:36:57.534766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.521 [2024-11-20 12:36:57.534787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.521 [2024-11-20 12:36:57.539409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.521 [2024-11-20 12:36:57.539658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.521 [2024-11-20 12:36:57.539678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.521 [2024-11-20 12:36:57.544611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.521 [2024-11-20 12:36:57.544857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.521 [2024-11-20 12:36:57.544877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.521 [2024-11-20 12:36:57.550765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.521 [2024-11-20 12:36:57.551077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.521 [2024-11-20 12:36:57.551098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.521 [2024-11-20 12:36:57.557808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.521 [2024-11-20 12:36:57.558089] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.521 [2024-11-20 12:36:57.558110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.521 6222.00 IOPS, 777.75 MiB/s [2024-11-20T11:36:57.637Z] [2024-11-20 12:36:57.565171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.521 [2024-11-20 12:36:57.565444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.521 [2024-11-20 12:36:57.565464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.521 [2024-11-20 12:36:57.572413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.521 [2024-11-20 12:36:57.572768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.521 [2024-11-20 12:36:57.572790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.521 [2024-11-20 12:36:57.580015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.521 [2024-11-20 12:36:57.580267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.521 [2024-11-20 12:36:57.580288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.521 [2024-11-20 12:36:57.585828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with 
pdu=0x2000166ff3c8 00:27:14.521 [2024-11-20 12:36:57.586073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.521 [2024-11-20 12:36:57.586094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.521 [2024-11-20 12:36:57.590961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.521 [2024-11-20 12:36:57.591198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.521 [2024-11-20 12:36:57.591220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.521 [2024-11-20 12:36:57.595681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.521 [2024-11-20 12:36:57.595923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.521 [2024-11-20 12:36:57.595943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.521 [2024-11-20 12:36:57.600215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.521 [2024-11-20 12:36:57.600451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.521 [2024-11-20 12:36:57.600471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.521 [2024-11-20 12:36:57.604625] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.521 [2024-11-20 12:36:57.604873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.521 [2024-11-20 12:36:57.604893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.521 [2024-11-20 12:36:57.609585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.521 [2024-11-20 12:36:57.609841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.521 [2024-11-20 12:36:57.609861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.521 [2024-11-20 12:36:57.614767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.521 [2024-11-20 12:36:57.615004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.521 [2024-11-20 12:36:57.615024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.521 [2024-11-20 12:36:57.620201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.521 [2024-11-20 12:36:57.620444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.521 [2024-11-20 12:36:57.620464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.521 [2024-11-20 
12:36:57.625327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.521 [2024-11-20 12:36:57.625567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.521 [2024-11-20 12:36:57.625588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.521 [2024-11-20 12:36:57.630149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.521 [2024-11-20 12:36:57.630396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.521 [2024-11-20 12:36:57.630417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.635205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.635453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.635474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.640125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.640364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.640384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.644850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.645102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.645123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.649465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.649710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.649730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.654588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.654828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.654851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.659684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.659927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.659952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.664746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.664987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.665009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.669633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.669875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.669895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.674458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.674696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.674717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.679142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.679392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.679412] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.684425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.684904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.684924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.689264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.689511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.689531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.693742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.694000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.694021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.698547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.698784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:14.782 [2024-11-20 12:36:57.698804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.703346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.703589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.703610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.707831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.708082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.708102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.712446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.712692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.712712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.717404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.717630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.717650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.722419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.722670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.722690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.727174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.727414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.727434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.731761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.732015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.732035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.736248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.736479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.736500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.741377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.741659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.741679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.747043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.747356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.747377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.752913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.782 [2024-11-20 12:36:57.753201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.753222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.782 [2024-11-20 12:36:57.758507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 
00:27:14.782 [2024-11-20 12:36:57.758787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.782 [2024-11-20 12:36:57.758807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.764470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.764753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.764774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.770627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.770942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.770969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.775622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.775838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.775859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.780230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.780452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.780472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.784792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.784997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.785022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.789378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.789595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.789616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.794027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.794236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.794257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.799214] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.799434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.799454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.804813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.805049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.805070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.810938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.811167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.811188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.815737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.815980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.816000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:14.783 [2024-11-20 12:36:57.820137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.820628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.820649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.824841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.825056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.825076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.829054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.829259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.829279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.833285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.833506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.833527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.837611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.837828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.837849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.841834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.842062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.842083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.846202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.846402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.846423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.850501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.850707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.850727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.854778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.855002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.855022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.859071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.859278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.859299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.863409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.863633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.863654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.867752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.867971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:14.783 [2024-11-20 12:36:57.867992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.871940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.872167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.872188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.876388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.876612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.876633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.880563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.880766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.880787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.884740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.783 [2024-11-20 12:36:57.884937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.783 [2024-11-20 12:36:57.884965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.783 [2024-11-20 12:36:57.888830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.784 [2024-11-20 12:36:57.889039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.784 [2024-11-20 12:36:57.889059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.784 [2024-11-20 12:36:57.892784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:14.784 [2024-11-20 12:36:57.892986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.784 [2024-11-20 12:36:57.893006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.896818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.044 [2024-11-20 12:36:57.897019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.897038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.900827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.044 [2024-11-20 12:36:57.901039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.901063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.904777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.044 [2024-11-20 12:36:57.904989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.905008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.908713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.044 [2024-11-20 12:36:57.908908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.908928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.912605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.044 [2024-11-20 12:36:57.912795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.912814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.916499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 
00:27:15.044 [2024-11-20 12:36:57.916689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.916708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.920385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.044 [2024-11-20 12:36:57.920581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.920602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.924265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.044 [2024-11-20 12:36:57.924457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.924476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.928207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.044 [2024-11-20 12:36:57.928406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.928425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.932608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.044 [2024-11-20 12:36:57.932819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.932839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.937803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.044 [2024-11-20 12:36:57.938035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.938056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.942795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.044 [2024-11-20 12:36:57.943069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.943089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.947897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.044 [2024-11-20 12:36:57.948151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.948171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.952976] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.044 [2024-11-20 12:36:57.953222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.953243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.958009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.044 [2024-11-20 12:36:57.958181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.958201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.963138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.044 [2024-11-20 12:36:57.963384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.963405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.968221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.044 [2024-11-20 12:36:57.968464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.968484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:27:15.044 [2024-11-20 12:36:57.973398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.044 [2024-11-20 12:36:57.973679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.973700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.978447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.044 [2024-11-20 12:36:57.978622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.978643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.983543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.044 [2024-11-20 12:36:57.983842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.044 [2024-11-20 12:36:57.983865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.044 [2024-11-20 12:36:57.988998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:57.989237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:57.989257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:57.994332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:57.994589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:57.994610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:57.999525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:57.999732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:57.999751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.003579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.003748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.003768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.007797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.007961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.007980] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.012081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.012284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.012304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.016349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.016494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.016512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.020489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.020647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.020669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.025021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.025217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.025236] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.029496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.029672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.029691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.034339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.034512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.034531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.040095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.040243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.040263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.044957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.045117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:15.045 [2024-11-20 12:36:58.045137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.049897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.050037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.050057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.054549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.054687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.054706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.059135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.059296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.059315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.064320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.064447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.064468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.069127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.069291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.069310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.073447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.073592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.073611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.077955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.078086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.078104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.082504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.082659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.082680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.087557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.087624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.087643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.092419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.092597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.092618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.096982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.097147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.097166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.101666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 
00:27:15.045 [2024-11-20 12:36:58.101818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.101838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.106183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.106353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.106373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.110813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.045 [2024-11-20 12:36:58.110968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.045 [2024-11-20 12:36:58.110988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.045 [2024-11-20 12:36:58.115639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.046 [2024-11-20 12:36:58.115778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.046 [2024-11-20 12:36:58.115797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.046 [2024-11-20 12:36:58.120251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.046 [2024-11-20 12:36:58.120397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.046 [2024-11-20 12:36:58.120416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.046 [2024-11-20 12:36:58.124844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.046 [2024-11-20 12:36:58.125025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.046 [2024-11-20 12:36:58.125044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.046 [2024-11-20 12:36:58.129341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.046 [2024-11-20 12:36:58.129490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.046 [2024-11-20 12:36:58.129509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.046 [2024-11-20 12:36:58.133853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.046 [2024-11-20 12:36:58.133936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.046 [2024-11-20 12:36:58.133961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.046 [2024-11-20 12:36:58.138288] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.046 [2024-11-20 12:36:58.138449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.046 [2024-11-20 12:36:58.138470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.046 [2024-11-20 12:36:58.142501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.046 [2024-11-20 12:36:58.142678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.046 [2024-11-20 12:36:58.142702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.046 [2024-11-20 12:36:58.147145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.046 [2024-11-20 12:36:58.147311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.046 [2024-11-20 12:36:58.147330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.046 [2024-11-20 12:36:58.152055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.046 [2024-11-20 12:36:58.152184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.046 [2024-11-20 12:36:58.152203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:27:15.046 [2024-11-20 12:36:58.157532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.046 [2024-11-20 12:36:58.157849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.046 [2024-11-20 12:36:58.157870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.306 [2024-11-20 12:36:58.163945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.306 [2024-11-20 12:36:58.164093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.306 [2024-11-20 12:36:58.164113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.306 [2024-11-20 12:36:58.170437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.306 [2024-11-20 12:36:58.170616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.306 [2024-11-20 12:36:58.170636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.306 [2024-11-20 12:36:58.176984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.306 [2024-11-20 12:36:58.177182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.306 [2024-11-20 12:36:58.177202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.306 [2024-11-20 12:36:58.183715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.306 [2024-11-20 12:36:58.183867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.306 [2024-11-20 12:36:58.183887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.306 [2024-11-20 12:36:58.190524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.306 [2024-11-20 12:36:58.190745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.306 [2024-11-20 12:36:58.190766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.306 [2024-11-20 12:36:58.197303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.306 [2024-11-20 12:36:58.197520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.306 [2024-11-20 12:36:58.197541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.306 [2024-11-20 12:36:58.203691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.306 [2024-11-20 12:36:58.203860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.306 [2024-11-20 12:36:58.203881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.306 [2024-11-20 12:36:58.210138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.306 [2024-11-20 12:36:58.210320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.306 [2024-11-20 12:36:58.210339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.306 [2024-11-20 12:36:58.216621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.216805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.216824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.223458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.223758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.223780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.230204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.230504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:15.307 [2024-11-20 12:36:58.230525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.236847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.237088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.237110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.243581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.243880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.243902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.249848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.250036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.250056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.254597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.254785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.254806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.259199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.259386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.259405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.263250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.263434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.263454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.267148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.267334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.267354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.270844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.271037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.271056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.274866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.275054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.275074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.278872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.279066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.279086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.282941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.283134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.283154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.286971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 
00:27:15.307 [2024-11-20 12:36:58.287161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.287187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.290899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.291106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.291126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.294757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.294959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.294980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.299196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.299381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.299401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.303752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.303945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.303971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.307859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.308047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.308067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.312296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.307 [2024-11-20 12:36:58.312478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.307 [2024-11-20 12:36:58.312497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.307 [2024-11-20 12:36:58.317363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.317546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.317565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.321491] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.321677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.321696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.325446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.325640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.325659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.329299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.329482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.329503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.333308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.333494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.333513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:27:15.308 [2024-11-20 12:36:58.337305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.337488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.337508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.341340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.341525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.341545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.345260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.345445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.345464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.349074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.349262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.349282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.353042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.353230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.353250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.357744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.357927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.357952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.362192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.362393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.362414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.366220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.366403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.366423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.370266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.370456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.370476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.374289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.374474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.374495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.378209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.378394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.378413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.382264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.382451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:15.308 [2024-11-20 12:36:58.382470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.386318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.386509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.386531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.391136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.391317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.391337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.395765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.395954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.395978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.399978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.400166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.400186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.404025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.404211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.404230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.408006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.408186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.408205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.308 [2024-11-20 12:36:58.412041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.308 [2024-11-20 12:36:58.412225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.308 [2024-11-20 12:36:58.412244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.309 [2024-11-20 12:36:58.416100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.309 [2024-11-20 12:36:58.416288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.309 [2024-11-20 12:36:58.416308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.309 [2024-11-20 12:36:58.420113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.309 [2024-11-20 12:36:58.420301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.309 [2024-11-20 12:36:58.420322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.713 [2024-11-20 12:36:58.424283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.713 [2024-11-20 12:36:58.424487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.713 [2024-11-20 12:36:58.424511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.713 [2024-11-20 12:36:58.428380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.713 [2024-11-20 12:36:58.428567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.713 [2024-11-20 12:36:58.428587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.713 [2024-11-20 12:36:58.432411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 
00:27:15.713 [2024-11-20 12:36:58.432606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.713 [2024-11-20 12:36:58.432628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.713 [2024-11-20 12:36:58.436458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.713 [2024-11-20 12:36:58.436647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.713 [2024-11-20 12:36:58.436668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.713 [2024-11-20 12:36:58.440795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.713 [2024-11-20 12:36:58.441023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.441045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.714 [2024-11-20 12:36:58.445676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.445984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.446007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.714 [2024-11-20 12:36:58.451656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.451879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.451900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.714 [2024-11-20 12:36:58.457364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.457666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.457688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.714 [2024-11-20 12:36:58.463337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.463640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.463662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.714 [2024-11-20 12:36:58.470104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.470293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.470313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.714 [2024-11-20 12:36:58.476724] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.476898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.476917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.714 [2024-11-20 12:36:58.482990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.483146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.483165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.714 [2024-11-20 12:36:58.489688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.489855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.489874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.714 [2024-11-20 12:36:58.496729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.496893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.496912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:15.714 [2024-11-20 12:36:58.503415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.503569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.503588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.714 [2024-11-20 12:36:58.510255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.510405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.510425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.714 [2024-11-20 12:36:58.517526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.517634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.517653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.714 [2024-11-20 12:36:58.524428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.524553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.524572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.714 [2024-11-20 12:36:58.531388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.531568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.531587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.714 [2024-11-20 12:36:58.537561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.537677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.537701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.714 [2024-11-20 12:36:58.545086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.545184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.545205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.714 [2024-11-20 12:36:58.551035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.551117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.551137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:15.714 [2024-11-20 12:36:58.556577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.556672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.556691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.714 [2024-11-20 12:36:58.562196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.562315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.562335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:15.714 6272.50 IOPS, 784.06 MiB/s [2024-11-20T11:36:58.830Z] [2024-11-20 12:36:58.567208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x24ab980) with pdu=0x2000166ff3c8 00:27:15.714 [2024-11-20 12:36:58.567285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.714 [2024-11-20 12:36:58.567305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:15.714 00:27:15.714 Latency(us) 00:27:15.714 [2024-11-20T11:36:58.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.714 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:15.714 nvme0n1 : 2.00 6272.04 784.01 
0.00 0.00 2546.89 1574.29 7636.37 00:27:15.714 [2024-11-20T11:36:58.830Z] =================================================================================================================== 00:27:15.714 [2024-11-20T11:36:58.830Z] Total : 6272.04 784.01 0.00 0.00 2546.89 1574.29 7636.37 00:27:15.714 { 00:27:15.714 "results": [ 00:27:15.714 { 00:27:15.714 "job": "nvme0n1", 00:27:15.714 "core_mask": "0x2", 00:27:15.714 "workload": "randwrite", 00:27:15.714 "status": "finished", 00:27:15.714 "queue_depth": 16, 00:27:15.714 "io_size": 131072, 00:27:15.714 "runtime": 2.003334, 00:27:15.714 "iops": 6272.044501815473, 00:27:15.714 "mibps": 784.0055627269342, 00:27:15.714 "io_failed": 0, 00:27:15.714 "io_timeout": 0, 00:27:15.714 "avg_latency_us": 2546.892077163965, 00:27:15.714 "min_latency_us": 1574.2886956521738, 00:27:15.714 "max_latency_us": 7636.368695652174 00:27:15.714 } 00:27:15.714 ], 00:27:15.714 "core_count": 1 00:27:15.714 } 00:27:15.714 12:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:15.714 12:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:15.714 12:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:15.714 | .driver_specific 00:27:15.714 | .nvme_error 00:27:15.714 | .status_code 00:27:15.714 | .command_transient_transport_error' 00:27:15.714 12:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:15.983 12:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 406 > 0 )) 00:27:15.983 12:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 591637 00:27:15.983 12:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 
-- # '[' -z 591637 ']' 00:27:15.983 12:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 591637 00:27:15.983 12:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:15.983 12:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:15.983 12:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 591637 00:27:15.983 12:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:15.983 12:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:15.983 12:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 591637' 00:27:15.983 killing process with pid 591637 00:27:15.983 12:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 591637 00:27:15.983 Received shutdown signal, test time was about 2.000000 seconds 00:27:15.983 00:27:15.983 Latency(us) 00:27:15.983 [2024-11-20T11:36:59.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.983 [2024-11-20T11:36:59.099Z] =================================================================================================================== 00:27:15.983 [2024-11-20T11:36:59.099Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:15.983 12:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 591637 00:27:15.983 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 589900 00:27:15.983 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 589900 ']' 00:27:15.983 12:36:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 589900 00:27:15.983 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:15.983 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:15.983 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 589900 00:27:15.983 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:15.983 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:15.983 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 589900' 00:27:15.983 killing process with pid 589900 00:27:15.983 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 589900 00:27:15.983 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 589900 00:27:16.242 00:27:16.242 real 0m13.984s 00:27:16.242 user 0m26.775s 00:27:16.242 sys 0m4.572s 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:16.242 ************************************ 00:27:16.242 END TEST nvmf_digest_error 00:27:16.242 ************************************ 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:16.242 12:36:59 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:16.242 rmmod nvme_tcp 00:27:16.242 rmmod nvme_fabrics 00:27:16.242 rmmod nvme_keyring 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 589900 ']' 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 589900 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 589900 ']' 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 589900 00:27:16.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (589900) - No such process 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 589900 is not found' 00:27:16.242 Process with pid 589900 is not found 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@791 -- # iptables-save 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.242 12:36:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:18.780 00:27:18.780 real 0m36.408s 00:27:18.780 user 0m55.564s 00:27:18.780 sys 0m13.644s 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:18.780 ************************************ 00:27:18.780 END TEST nvmf_digest 00:27:18.780 ************************************ 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:27:18.780 12:37:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.780 ************************************ 00:27:18.780 START TEST nvmf_bdevperf 00:27:18.780 ************************************ 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:18.780 * Looking for test storage... 00:27:18.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:18.780 12:37:01 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:18.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.780 --rc genhtml_branch_coverage=1 00:27:18.780 --rc genhtml_function_coverage=1 00:27:18.780 --rc genhtml_legend=1 00:27:18.780 --rc geninfo_all_blocks=1 00:27:18.780 --rc geninfo_unexecuted_blocks=1 00:27:18.780 00:27:18.780 ' 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:18.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.780 --rc genhtml_branch_coverage=1 00:27:18.780 --rc genhtml_function_coverage=1 00:27:18.780 --rc genhtml_legend=1 00:27:18.780 --rc geninfo_all_blocks=1 00:27:18.780 --rc geninfo_unexecuted_blocks=1 00:27:18.780 00:27:18.780 ' 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:18.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.780 --rc genhtml_branch_coverage=1 00:27:18.780 --rc genhtml_function_coverage=1 00:27:18.780 --rc genhtml_legend=1 00:27:18.780 --rc geninfo_all_blocks=1 00:27:18.780 --rc geninfo_unexecuted_blocks=1 00:27:18.780 00:27:18.780 ' 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:18.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.780 --rc genhtml_branch_coverage=1 00:27:18.780 --rc genhtml_function_coverage=1 00:27:18.780 --rc genhtml_legend=1 00:27:18.780 --rc geninfo_all_blocks=1 00:27:18.780 --rc geninfo_unexecuted_blocks=1 00:27:18.780 00:27:18.780 ' 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.780 12:37:01 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:18.780 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:18.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:27:18.781 12:37:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.355 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:25.356 Found 
0000:86:00.0 (0x8086 - 0x159b) 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:25.356 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:25.356 Found net devices under 0000:86:00.0: cvl_0_0 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:25.356 Found net devices under 0000:86:00.1: cvl_0_1 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:25.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:25.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:27:25.356 00:27:25.356 --- 10.0.0.2 ping statistics --- 00:27:25.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.356 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:25.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:27:25.356 00:27:25.356 --- 10.0.0.1 ping statistics --- 00:27:25.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.356 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=595671 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 595671 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 595671 ']' 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.356 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:25.357 [2024-11-20 12:37:07.662766] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:27:25.357 [2024-11-20 12:37:07.662807] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.357 [2024-11-20 12:37:07.743338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:25.357 [2024-11-20 12:37:07.785542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.357 [2024-11-20 12:37:07.785581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:25.357 [2024-11-20 12:37:07.785589] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.357 [2024-11-20 12:37:07.785596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.357 [2024-11-20 12:37:07.785601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:25.357 [2024-11-20 12:37:07.786906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:25.357 [2024-11-20 12:37:07.787024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.357 [2024-11-20 12:37:07.787025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:25.357 [2024-11-20 12:37:07.930731] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.357 12:37:07 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:25.357 Malloc0 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:25.357 [2024-11-20 12:37:07.992063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:25.357 12:37:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:25.357 { 00:27:25.357 "params": { 00:27:25.357 "name": "Nvme$subsystem", 00:27:25.357 "trtype": "$TEST_TRANSPORT", 00:27:25.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.357 "adrfam": "ipv4", 00:27:25.357 "trsvcid": "$NVMF_PORT", 00:27:25.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.357 "hdgst": ${hdgst:-false}, 00:27:25.357 "ddgst": ${ddgst:-false} 00:27:25.357 }, 00:27:25.357 "method": "bdev_nvme_attach_controller" 00:27:25.357 } 00:27:25.357 EOF 00:27:25.357 )") 00:27:25.357 12:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:25.357 12:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:27:25.357 12:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:25.357 12:37:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:25.357 "params": { 00:27:25.357 "name": "Nvme1", 00:27:25.357 "trtype": "tcp", 00:27:25.357 "traddr": "10.0.0.2", 00:27:25.357 "adrfam": "ipv4", 00:27:25.357 "trsvcid": "4420", 00:27:25.357 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:25.357 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:25.357 "hdgst": false, 00:27:25.357 "ddgst": false 00:27:25.357 }, 00:27:25.357 "method": "bdev_nvme_attach_controller" 00:27:25.357 }' 00:27:25.357 [2024-11-20 12:37:08.043314] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:27:25.357 [2024-11-20 12:37:08.043358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid595697 ] 00:27:25.357 [2024-11-20 12:37:08.119124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.357 [2024-11-20 12:37:08.160735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.616 Running I/O for 1 seconds... 
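The xtrace above shows `gen_nvmf_target_json` assembling the bdevperf config: for each subsystem it appends a `bdev_nvme_attach_controller` entry via a heredoc, then joins and prints the result through `jq`. A minimal sketch of that per-subsystem heredoc is below, with the variable values (`tcp`, `10.0.0.2`, `4420`) taken from this log; the exact structure of the real function in `nvmf/common.sh` may differ, this only reproduces the JSON shape printed above.

```shell
# Sketch of one gen_nvmf_target_json config entry (values from this log run).
# hdgst/ddgst default to false when the variables are unset, as in the trace.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

config=$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

In the test itself this JSON is handed to bdevperf through a process substitution (`--json /dev/fd/62`), so no temporary config file is written.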
00:27:26.558 11077.00 IOPS, 43.27 MiB/s 00:27:26.558 Latency(us) 00:27:26.558 [2024-11-20T11:37:09.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.558 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:26.558 Verification LBA range: start 0x0 length 0x4000 00:27:26.558 Nvme1n1 : 1.01 11138.12 43.51 0.00 0.00 11436.30 1681.14 13278.16 00:27:26.558 [2024-11-20T11:37:09.674Z] =================================================================================================================== 00:27:26.558 [2024-11-20T11:37:09.674Z] Total : 11138.12 43.51 0.00 0.00 11436.30 1681.14 13278.16 00:27:26.558 12:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=596046 00:27:26.558 12:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:26.558 12:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:26.558 12:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:26.558 12:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:26.558 12:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:26.558 12:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:26.558 12:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:26.558 { 00:27:26.558 "params": { 00:27:26.558 "name": "Nvme$subsystem", 00:27:26.558 "trtype": "$TEST_TRANSPORT", 00:27:26.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.558 "adrfam": "ipv4", 00:27:26.558 "trsvcid": "$NVMF_PORT", 00:27:26.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.558 "hdgst": ${hdgst:-false}, 00:27:26.558 "ddgst": 
${ddgst:-false} 00:27:26.558 }, 00:27:26.558 "method": "bdev_nvme_attach_controller" 00:27:26.558 } 00:27:26.558 EOF 00:27:26.558 )") 00:27:26.558 12:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:26.558 12:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:26.558 12:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:26.558 12:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:26.558 "params": { 00:27:26.558 "name": "Nvme1", 00:27:26.558 "trtype": "tcp", 00:27:26.558 "traddr": "10.0.0.2", 00:27:26.558 "adrfam": "ipv4", 00:27:26.558 "trsvcid": "4420", 00:27:26.558 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:26.558 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:26.558 "hdgst": false, 00:27:26.558 "ddgst": false 00:27:26.558 }, 00:27:26.558 "method": "bdev_nvme_attach_controller" 00:27:26.558 }' 00:27:26.817 [2024-11-20 12:37:09.694033] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:27:26.817 [2024-11-20 12:37:09.694083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596046 ] 00:27:26.817 [2024-11-20 12:37:09.767208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.817 [2024-11-20 12:37:09.808314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.076 Running I/O for 15 seconds... 
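The 15-second bdevperf run that starts here is the fault-injection phase: while I/O is in flight, the script sends SIGKILL to the nvmf target (pid 595671 in this run), which is what produces the flood of `ABORTED - SQ DELETION` completions below. A hedged sketch of that kill-and-reap pattern, with `sleep` standing in for the real `nvmf_tgt` process since reproducing the SPDK target itself is out of scope:

```shell
# Sketch of the bdevperf.sh fault-injection step: SIGKILL the target process
# mid-run and observe its exit status. 'sleep 30' is a stand-in for nvmf_tgt.
sleep 30 &
tgt_pid=$!

kill -9 "$tgt_pid"
wait "$tgt_pid"
status=$?          # a SIGKILL'd child reports 128 + 9 = 137
echo "target exited with status $status"
```

On the host side, in-flight commands against the dead target complete with ABORTED status once the controller's submission queues are torn down, and bdevperf's reconnect handling (`-f` on the command line above) is what the remaining 12 seconds of the run exercise.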
00:27:29.390 11015.00 IOPS, 43.03 MiB/s [2024-11-20T11:37:12.766Z] 11012.50 IOPS, 43.02 MiB/s [2024-11-20T11:37:12.766Z] 12:37:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 595671 00:27:29.650 12:37:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:29.650 [2024-11-20 12:37:12.672203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.650 [2024-11-20 12:37:12.672241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.650 [2024-11-20 12:37:12.672257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.650 [2024-11-20 12:37:12.672267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.650 [2024-11-20 12:37:12.672277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.650 [2024-11-20 12:37:12.672285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.650 [2024-11-20 12:37:12.672294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.650 [2024-11-20 12:37:12.672302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.650 [2024-11-20 12:37:12.672311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.650 [2024-11-20 12:37:12.672318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.650 [2024-11-20 12:37:12.672326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.650 [2024-11-20 12:37:12.672335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.650 [2024-11-20 12:37:12.672349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.650 [2024-11-20 12:37:12.672358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.650 [2024-11-20 12:37:12.672370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.650 [2024-11-20 12:37:12.672383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.650 [2024-11-20 12:37:12.672393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.650 [2024-11-20 12:37:12.672406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.650 [2024-11-20 12:37:12.672417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.650 [2024-11-20 12:37:12.672430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.650 [2024-11-20 12:37:12.672439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:29.650 [2024-11-20 12:37:12.672447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.650 [2024-11-20 12:37:12.672455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.650 [2024-11-20 12:37:12.672464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.650 [2024-11-20 12:37:12.672474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.650 [2024-11-20 12:37:12.672484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.650 [2024-11-20 12:37:12.672495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.650 [2024-11-20 12:37:12.672505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.650 [2024-11-20 12:37:12.672515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.650 [2024-11-20 12:37:12.672525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.650 [2024-11-20 12:37:12.672535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.650 [2024-11-20 12:37:12.672545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.650 [2024-11-20 12:37:12.672554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.650 [2024-11-20 12:37:12.672562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:29.651 [2024-11-20 12:37:12.672777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.672984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.672994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.673007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.651 [2024-11-20 12:37:12.673015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.673025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.651 [2024-11-20 12:37:12.673036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.673048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.651 [2024-11-20 12:37:12.673055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.673066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.651 [2024-11-20 12:37:12.673078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.673087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.651 [2024-11-20 12:37:12.673097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.673109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:29.651 [2024-11-20 12:37:12.673119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.673134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.651 [2024-11-20 12:37:12.673143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.673155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.673164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.673173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.673181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.673189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.673196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.673204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.651 [2024-11-20 12:37:12.673211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.651 [2024-11-20 12:37:12.673220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.652 [2024-11-20 12:37:12.673227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.652 [2024-11-20 12:37:12.673242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.652 [2024-11-20 12:37:12.673257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.652 [2024-11-20 12:37:12.673272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.652 [2024-11-20 12:37:12.673287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.652 [2024-11-20 12:37:12.673305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.652 [2024-11-20 12:37:12.673324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.652 [2024-11-20 12:37:12.673344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.652 [2024-11-20 12:37:12.673362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.652 [2024-11-20 12:37:12.673377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.652 [2024-11-20 12:37:12.673392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:29.652 [2024-11-20 12:37:12.673408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.652 [2024-11-20 12:37:12.673422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.652 [2024-11-20 12:37:12.673437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 
[2024-11-20 12:37:12.673667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673756] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.652 [2024-11-20 12:37:12.673780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.652 [2024-11-20 12:37:12.673788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.673795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.673803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.673809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.673817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.673825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.673833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.673840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.673848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.673854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.673862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.673869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.673878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.673884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.673893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.673899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.673907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.673913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.673922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.673930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.673939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.673945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.673962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.673968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.673978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.673984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.673993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 
12:37:12.674113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674199] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.653 [2024-11-20 12:37:12.674307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.653 [2024-11-20 12:37:12.674315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.654 [2024-11-20 12:37:12.674323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.654 [2024-11-20 12:37:12.674333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.654 [2024-11-20 12:37:12.674340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.654 [2024-11-20 12:37:12.674348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.654 [2024-11-20 12:37:12.674354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.654 [2024-11-20 12:37:12.674362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.654 [2024-11-20 12:37:12.674369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.654 [2024-11-20 12:37:12.674377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.654 [2024-11-20 12:37:12.674384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.654 [2024-11-20 12:37:12.674393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.654 [2024-11-20 12:37:12.674399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.654 [2024-11-20 12:37:12.674407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.654 [2024-11-20 12:37:12.674414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.654 [2024-11-20 12:37:12.674421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117ecf0 is same with the state(6) to be set 00:27:29.654 [2024-11-20 12:37:12.674430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:29.654 [2024-11-20 12:37:12.674436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:29.654 [2024-11-20 12:37:12.674442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96248 len:8 PRP1 0x0 PRP2 0x0 00:27:29.654 [2024-11-20 12:37:12.674450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.654 [2024-11-20 12:37:12.677329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.654 [2024-11-20 12:37:12.677384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.654 [2024-11-20 12:37:12.677918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.654 [2024-11-20 12:37:12.677936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.654 [2024-11-20 12:37:12.677945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.654 [2024-11-20 12:37:12.678133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.654 [2024-11-20 12:37:12.678312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.654 [2024-11-20 12:37:12.678321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.654 [2024-11-20 12:37:12.678329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.654 [2024-11-20 12:37:12.678337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.654 [2024-11-20 12:37:12.690696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.654 [2024-11-20 12:37:12.691073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.654 [2024-11-20 12:37:12.691095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:29.654 [2024-11-20 12:37:12.691104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:29.654 [2024-11-20 12:37:12.691282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:29.654 [2024-11-20 12:37:12.691462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.654 [2024-11-20 12:37:12.691472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.654 [2024-11-20 12:37:12.691480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.654 [2024-11-20 12:37:12.691488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.654 [2024-11-20 12:37:12.703839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.654 [2024-11-20 12:37:12.704209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.654 [2024-11-20 12:37:12.704228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:29.654 [2024-11-20 12:37:12.704236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:29.654 [2024-11-20 12:37:12.704413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:29.654 [2024-11-20 12:37:12.704592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.654 [2024-11-20 12:37:12.704602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.654 [2024-11-20 12:37:12.704610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.654 [2024-11-20 12:37:12.704617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.654 [2024-11-20 12:37:12.716962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.654 [2024-11-20 12:37:12.717403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.654 [2024-11-20 12:37:12.717421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:29.654 [2024-11-20 12:37:12.717429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:29.654 [2024-11-20 12:37:12.717603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:29.654 [2024-11-20 12:37:12.717777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.654 [2024-11-20 12:37:12.717787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.654 [2024-11-20 12:37:12.717793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.654 [2024-11-20 12:37:12.717801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.654 [2024-11-20 12:37:12.729800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.654 [2024-11-20 12:37:12.730227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.654 [2024-11-20 12:37:12.730244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:29.654 [2024-11-20 12:37:12.730256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:29.654 [2024-11-20 12:37:12.730420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:29.654 [2024-11-20 12:37:12.730585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.654 [2024-11-20 12:37:12.730594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.654 [2024-11-20 12:37:12.730600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.654 [2024-11-20 12:37:12.730607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.654 [2024-11-20 12:37:12.742711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.654 [2024-11-20 12:37:12.743144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.654 [2024-11-20 12:37:12.743190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:29.654 [2024-11-20 12:37:12.743215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:29.654 [2024-11-20 12:37:12.743793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:29.654 [2024-11-20 12:37:12.744323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.654 [2024-11-20 12:37:12.744336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.654 [2024-11-20 12:37:12.744343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.654 [2024-11-20 12:37:12.744350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.655 [2024-11-20 12:37:12.755541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:29.655 [2024-11-20 12:37:12.755975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.655 [2024-11-20 12:37:12.755993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:29.655 [2024-11-20 12:37:12.756001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:29.655 [2024-11-20 12:37:12.756164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:29.655 [2024-11-20 12:37:12.756327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:29.655 [2024-11-20 12:37:12.756336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:29.655 [2024-11-20 12:37:12.756343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:29.655 [2024-11-20 12:37:12.756349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:29.914 [2024-11-20 12:37:12.768488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.914 [2024-11-20 12:37:12.768925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.914 [2024-11-20 12:37:12.768987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.914 [2024-11-20 12:37:12.769014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.914 [2024-11-20 12:37:12.769381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.914 [2024-11-20 12:37:12.769549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.914 [2024-11-20 12:37:12.769559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.914 [2024-11-20 12:37:12.769565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.914 [2024-11-20 12:37:12.769572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.914 [2024-11-20 12:37:12.781428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.914 [2024-11-20 12:37:12.781835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.914 [2024-11-20 12:37:12.781852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.914 [2024-11-20 12:37:12.781860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.914 [2024-11-20 12:37:12.782049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.914 [2024-11-20 12:37:12.782223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.914 [2024-11-20 12:37:12.782233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.914 [2024-11-20 12:37:12.782240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.914 [2024-11-20 12:37:12.782248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.914 [2024-11-20 12:37:12.794333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.915 [2024-11-20 12:37:12.794755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.915 [2024-11-20 12:37:12.794773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.915 [2024-11-20 12:37:12.794781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.915 [2024-11-20 12:37:12.794945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.915 [2024-11-20 12:37:12.795116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.915 [2024-11-20 12:37:12.795126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.915 [2024-11-20 12:37:12.795132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.915 [2024-11-20 12:37:12.795139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.915 [2024-11-20 12:37:12.807345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.915 [2024-11-20 12:37:12.807786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.915 [2024-11-20 12:37:12.807832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.915 [2024-11-20 12:37:12.807857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.915 [2024-11-20 12:37:12.808282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.915 [2024-11-20 12:37:12.808457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.915 [2024-11-20 12:37:12.808467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.915 [2024-11-20 12:37:12.808477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.915 [2024-11-20 12:37:12.808485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.915 [2024-11-20 12:37:12.820181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.915 [2024-11-20 12:37:12.820593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.915 [2024-11-20 12:37:12.820646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.915 [2024-11-20 12:37:12.820672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.915 [2024-11-20 12:37:12.821212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.915 [2024-11-20 12:37:12.821387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.915 [2024-11-20 12:37:12.821397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.915 [2024-11-20 12:37:12.821404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.915 [2024-11-20 12:37:12.821411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.915 [2024-11-20 12:37:12.832999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.915 [2024-11-20 12:37:12.833396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.915 [2024-11-20 12:37:12.833414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.915 [2024-11-20 12:37:12.833422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.915 [2024-11-20 12:37:12.833586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.915 [2024-11-20 12:37:12.833749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.915 [2024-11-20 12:37:12.833759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.915 [2024-11-20 12:37:12.833765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.915 [2024-11-20 12:37:12.833772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.915 [2024-11-20 12:37:12.845893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.915 [2024-11-20 12:37:12.846296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.915 [2024-11-20 12:37:12.846313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.915 [2024-11-20 12:37:12.846321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.915 [2024-11-20 12:37:12.846486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.915 [2024-11-20 12:37:12.846649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.915 [2024-11-20 12:37:12.846658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.915 [2024-11-20 12:37:12.846665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.915 [2024-11-20 12:37:12.846672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.915 [2024-11-20 12:37:12.858801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.915 [2024-11-20 12:37:12.859232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.915 [2024-11-20 12:37:12.859271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.915 [2024-11-20 12:37:12.859298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.915 [2024-11-20 12:37:12.859875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.915 [2024-11-20 12:37:12.860464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.915 [2024-11-20 12:37:12.860474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.915 [2024-11-20 12:37:12.860481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.915 [2024-11-20 12:37:12.860487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.915 [2024-11-20 12:37:12.871677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.915 [2024-11-20 12:37:12.872095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.915 [2024-11-20 12:37:12.872112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.915 [2024-11-20 12:37:12.872120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.915 [2024-11-20 12:37:12.872283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.915 [2024-11-20 12:37:12.872446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.915 [2024-11-20 12:37:12.872456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.915 [2024-11-20 12:37:12.872463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.915 [2024-11-20 12:37:12.872469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.915 [2024-11-20 12:37:12.884500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.915 [2024-11-20 12:37:12.884892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.915 [2024-11-20 12:37:12.884909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.915 [2024-11-20 12:37:12.884916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.915 [2024-11-20 12:37:12.885086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.915 [2024-11-20 12:37:12.885251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.915 [2024-11-20 12:37:12.885260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.915 [2024-11-20 12:37:12.885267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.915 [2024-11-20 12:37:12.885273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.915 [2024-11-20 12:37:12.897328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.915 [2024-11-20 12:37:12.897748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.915 [2024-11-20 12:37:12.897792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.916 [2024-11-20 12:37:12.897826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.916 [2024-11-20 12:37:12.898429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.916 [2024-11-20 12:37:12.898603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.916 [2024-11-20 12:37:12.898612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.916 [2024-11-20 12:37:12.898618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.916 [2024-11-20 12:37:12.898625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.916 [2024-11-20 12:37:12.910270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.916 [2024-11-20 12:37:12.910632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.916 [2024-11-20 12:37:12.910678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.916 [2024-11-20 12:37:12.910703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.916 [2024-11-20 12:37:12.911185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.916 [2024-11-20 12:37:12.911359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.916 [2024-11-20 12:37:12.911369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.916 [2024-11-20 12:37:12.911376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.916 [2024-11-20 12:37:12.911382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.916 [2024-11-20 12:37:12.923123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.916 [2024-11-20 12:37:12.923558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.916 [2024-11-20 12:37:12.923575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.916 [2024-11-20 12:37:12.923584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.916 [2024-11-20 12:37:12.923750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.916 [2024-11-20 12:37:12.923914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.916 [2024-11-20 12:37:12.923924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.916 [2024-11-20 12:37:12.923931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.916 [2024-11-20 12:37:12.923937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.916 [2024-11-20 12:37:12.936254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.916 [2024-11-20 12:37:12.936550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.916 [2024-11-20 12:37:12.936591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.916 [2024-11-20 12:37:12.936619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.916 [2024-11-20 12:37:12.937207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.916 [2024-11-20 12:37:12.937742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.916 [2024-11-20 12:37:12.937761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.916 [2024-11-20 12:37:12.937777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.916 [2024-11-20 12:37:12.937790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.916 [2024-11-20 12:37:12.951274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.916 [2024-11-20 12:37:12.951784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.916 [2024-11-20 12:37:12.951840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.916 [2024-11-20 12:37:12.951867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.916 [2024-11-20 12:37:12.952422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.916 [2024-11-20 12:37:12.952667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.916 [2024-11-20 12:37:12.952679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.916 [2024-11-20 12:37:12.952689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.916 [2024-11-20 12:37:12.952699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.916 [2024-11-20 12:37:12.964178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.916 [2024-11-20 12:37:12.964622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.916 [2024-11-20 12:37:12.964639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.916 [2024-11-20 12:37:12.964647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.916 [2024-11-20 12:37:12.964810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.916 [2024-11-20 12:37:12.964978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.916 [2024-11-20 12:37:12.964988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.916 [2024-11-20 12:37:12.964995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.916 [2024-11-20 12:37:12.965002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.916 [2024-11-20 12:37:12.977060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.916 [2024-11-20 12:37:12.977458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.916 [2024-11-20 12:37:12.977475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.916 [2024-11-20 12:37:12.977483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.916 [2024-11-20 12:37:12.977656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.916 [2024-11-20 12:37:12.977829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.916 [2024-11-20 12:37:12.977839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.916 [2024-11-20 12:37:12.977850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.916 [2024-11-20 12:37:12.977857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.916 [2024-11-20 12:37:12.989959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.916 [2024-11-20 12:37:12.990300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.916 [2024-11-20 12:37:12.990347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.916 [2024-11-20 12:37:12.990374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.916 [2024-11-20 12:37:12.990968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.916 [2024-11-20 12:37:12.991550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.916 [2024-11-20 12:37:12.991580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.916 [2024-11-20 12:37:12.991587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.916 [2024-11-20 12:37:12.991594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.916 [2024-11-20 12:37:13.002783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.916 [2024-11-20 12:37:13.003213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.916 [2024-11-20 12:37:13.003258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.916 [2024-11-20 12:37:13.003283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.916 [2024-11-20 12:37:13.003699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.916 [2024-11-20 12:37:13.003865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.917 [2024-11-20 12:37:13.003874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.917 [2024-11-20 12:37:13.003880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.917 [2024-11-20 12:37:13.003887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.917 [2024-11-20 12:37:13.015583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.917 [2024-11-20 12:37:13.016004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.917 [2024-11-20 12:37:13.016021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:29.917 [2024-11-20 12:37:13.016029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:29.917 [2024-11-20 12:37:13.016193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:29.917 [2024-11-20 12:37:13.016356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.917 [2024-11-20 12:37:13.016366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.917 [2024-11-20 12:37:13.016372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.917 [2024-11-20 12:37:13.016378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.917 [2024-11-20 12:37:13.028845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.176 [2024-11-20 12:37:13.029297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.176 [2024-11-20 12:37:13.029315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.176 [2024-11-20 12:37:13.029324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.176 [2024-11-20 12:37:13.029516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.176 [2024-11-20 12:37:13.029703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.176 [2024-11-20 12:37:13.029715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.176 [2024-11-20 12:37:13.029722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.177 [2024-11-20 12:37:13.029730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.177 [2024-11-20 12:37:13.041723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.177 [2024-11-20 12:37:13.042140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.177 [2024-11-20 12:37:13.042194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.177 [2024-11-20 12:37:13.042220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.177 [2024-11-20 12:37:13.042761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.177 [2024-11-20 12:37:13.042925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.177 [2024-11-20 12:37:13.042933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.177 [2024-11-20 12:37:13.042939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.177 [2024-11-20 12:37:13.042946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.177 [2024-11-20 12:37:13.054644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.177 [2024-11-20 12:37:13.055062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.177 [2024-11-20 12:37:13.055107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.177 [2024-11-20 12:37:13.055134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.177 [2024-11-20 12:37:13.055689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.177 [2024-11-20 12:37:13.055854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.177 [2024-11-20 12:37:13.055863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.177 [2024-11-20 12:37:13.055870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.177 [2024-11-20 12:37:13.055876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.177 [2024-11-20 12:37:13.067554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.177 [2024-11-20 12:37:13.067914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.177 [2024-11-20 12:37:13.067970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.177 [2024-11-20 12:37:13.068004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.177 [2024-11-20 12:37:13.068469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.177 [2024-11-20 12:37:13.068633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.177 [2024-11-20 12:37:13.068643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.177 [2024-11-20 12:37:13.068649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.177 [2024-11-20 12:37:13.068655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.177 [2024-11-20 12:37:13.080449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.177 [2024-11-20 12:37:13.080869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.177 [2024-11-20 12:37:13.080886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.177 [2024-11-20 12:37:13.080894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.177 [2024-11-20 12:37:13.081063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.177 [2024-11-20 12:37:13.081227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.177 [2024-11-20 12:37:13.081237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.177 [2024-11-20 12:37:13.081244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.177 [2024-11-20 12:37:13.081250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.177 [2024-11-20 12:37:13.093306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.177 [2024-11-20 12:37:13.093722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.177 [2024-11-20 12:37:13.093739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.177 [2024-11-20 12:37:13.093747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.177 9429.33 IOPS, 36.83 MiB/s [2024-11-20T11:37:13.293Z] [2024-11-20 12:37:13.095201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.177 [2024-11-20 12:37:13.095365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.177 [2024-11-20 12:37:13.095374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.177 [2024-11-20 12:37:13.095380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.177 [2024-11-20 12:37:13.095386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.177 [2024-11-20 12:37:13.106194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.177 [2024-11-20 12:37:13.106610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.177 [2024-11-20 12:37:13.106628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.177 [2024-11-20 12:37:13.106636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.177 [2024-11-20 12:37:13.106799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.177 [2024-11-20 12:37:13.106973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.177 [2024-11-20 12:37:13.106999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.177 [2024-11-20 12:37:13.107006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.177 [2024-11-20 12:37:13.107014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.177 [2024-11-20 12:37:13.118992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.177 [2024-11-20 12:37:13.119411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.177 [2024-11-20 12:37:13.119457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.177 [2024-11-20 12:37:13.119482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.177 [2024-11-20 12:37:13.120043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.177 [2024-11-20 12:37:13.120435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.177 [2024-11-20 12:37:13.120454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.177 [2024-11-20 12:37:13.120469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.177 [2024-11-20 12:37:13.120483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.177 [2024-11-20 12:37:13.133848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.177 [2024-11-20 12:37:13.134298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.177 [2024-11-20 12:37:13.134321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.177 [2024-11-20 12:37:13.134332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.178 [2024-11-20 12:37:13.134586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.178 [2024-11-20 12:37:13.134840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.178 [2024-11-20 12:37:13.134853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.178 [2024-11-20 12:37:13.134863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.178 [2024-11-20 12:37:13.134874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.178 [2024-11-20 12:37:13.146764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.178 [2024-11-20 12:37:13.147159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.178 [2024-11-20 12:37:13.147204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.178 [2024-11-20 12:37:13.147230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.178 [2024-11-20 12:37:13.147808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.178 [2024-11-20 12:37:13.148186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.178 [2024-11-20 12:37:13.148196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.178 [2024-11-20 12:37:13.148203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.178 [2024-11-20 12:37:13.148215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.178 [2024-11-20 12:37:13.159669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.178 [2024-11-20 12:37:13.160077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.178 [2024-11-20 12:37:13.160123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.178 [2024-11-20 12:37:13.160148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.178 [2024-11-20 12:37:13.160725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.178 [2024-11-20 12:37:13.160889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.178 [2024-11-20 12:37:13.160899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.178 [2024-11-20 12:37:13.160906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.178 [2024-11-20 12:37:13.160912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.178 [2024-11-20 12:37:13.174693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.178 [2024-11-20 12:37:13.175205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.178 [2024-11-20 12:37:13.175251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.178 [2024-11-20 12:37:13.175276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.178 [2024-11-20 12:37:13.175809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.178 [2024-11-20 12:37:13.176073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.178 [2024-11-20 12:37:13.176088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.178 [2024-11-20 12:37:13.176099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.178 [2024-11-20 12:37:13.176110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.178 [2024-11-20 12:37:13.187697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.178 [2024-11-20 12:37:13.188104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.178 [2024-11-20 12:37:13.188133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.178 [2024-11-20 12:37:13.188141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.178 [2024-11-20 12:37:13.188702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.178 [2024-11-20 12:37:13.189253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.178 [2024-11-20 12:37:13.189263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.178 [2024-11-20 12:37:13.189270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.178 [2024-11-20 12:37:13.189277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.178 [2024-11-20 12:37:13.200653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.178 [2024-11-20 12:37:13.201062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.178 [2024-11-20 12:37:13.201080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.178 [2024-11-20 12:37:13.201088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.178 [2024-11-20 12:37:13.201250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.178 [2024-11-20 12:37:13.201414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.178 [2024-11-20 12:37:13.201423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.178 [2024-11-20 12:37:13.201429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.178 [2024-11-20 12:37:13.201436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.178 [2024-11-20 12:37:13.213455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.178 [2024-11-20 12:37:13.213845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.178 [2024-11-20 12:37:13.213862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.178 [2024-11-20 12:37:13.213870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.178 [2024-11-20 12:37:13.214059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.178 [2024-11-20 12:37:13.214238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.178 [2024-11-20 12:37:13.214247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.178 [2024-11-20 12:37:13.214254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.178 [2024-11-20 12:37:13.214261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.178 [2024-11-20 12:37:13.226316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.178 [2024-11-20 12:37:13.226727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.178 [2024-11-20 12:37:13.226744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.178 [2024-11-20 12:37:13.226752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.178 [2024-11-20 12:37:13.226915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.178 [2024-11-20 12:37:13.227085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.178 [2024-11-20 12:37:13.227094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.178 [2024-11-20 12:37:13.227100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.178 [2024-11-20 12:37:13.227107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.178 [2024-11-20 12:37:13.239282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.178 [2024-11-20 12:37:13.239574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.178 [2024-11-20 12:37:13.239593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.179 [2024-11-20 12:37:13.239604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.179 [2024-11-20 12:37:13.239777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.179 [2024-11-20 12:37:13.239958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.179 [2024-11-20 12:37:13.239969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.179 [2024-11-20 12:37:13.239976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.179 [2024-11-20 12:37:13.239983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.179 [2024-11-20 12:37:13.252214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.179 [2024-11-20 12:37:13.252621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.179 [2024-11-20 12:37:13.252639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.179 [2024-11-20 12:37:13.252647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.179 [2024-11-20 12:37:13.252809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.179 [2024-11-20 12:37:13.252980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.179 [2024-11-20 12:37:13.252991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.179 [2024-11-20 12:37:13.252997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.179 [2024-11-20 12:37:13.253004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.179 [2024-11-20 12:37:13.265054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.179 [2024-11-20 12:37:13.265340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.179 [2024-11-20 12:37:13.265357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.179 [2024-11-20 12:37:13.265364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.179 [2024-11-20 12:37:13.265528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.179 [2024-11-20 12:37:13.265693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.179 [2024-11-20 12:37:13.265702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.179 [2024-11-20 12:37:13.265709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.179 [2024-11-20 12:37:13.265715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.179 [2024-11-20 12:37:13.278000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.179 [2024-11-20 12:37:13.278459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.179 [2024-11-20 12:37:13.278502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.179 [2024-11-20 12:37:13.278526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.179 [2024-11-20 12:37:13.279119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.179 [2024-11-20 12:37:13.279325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.179 [2024-11-20 12:37:13.279337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.179 [2024-11-20 12:37:13.279344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.179 [2024-11-20 12:37:13.279350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.179 [2024-11-20 12:37:13.291177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.438 [2024-11-20 12:37:13.291626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.438 [2024-11-20 12:37:13.291649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.438 [2024-11-20 12:37:13.291662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.438 [2024-11-20 12:37:13.291854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.438 [2024-11-20 12:37:13.292049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.438 [2024-11-20 12:37:13.292061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.438 [2024-11-20 12:37:13.292068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.438 [2024-11-20 12:37:13.292075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.438 [2024-11-20 12:37:13.304255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.438 [2024-11-20 12:37:13.304600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.438 [2024-11-20 12:37:13.304618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.438 [2024-11-20 12:37:13.304626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.438 [2024-11-20 12:37:13.304790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.438 [2024-11-20 12:37:13.304960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.438 [2024-11-20 12:37:13.304970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.438 [2024-11-20 12:37:13.304995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.438 [2024-11-20 12:37:13.305003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.438 [2024-11-20 12:37:13.317121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.438 [2024-11-20 12:37:13.317464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.438 [2024-11-20 12:37:13.317481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.438 [2024-11-20 12:37:13.317489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.438 [2024-11-20 12:37:13.317652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.438 [2024-11-20 12:37:13.317816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.439 [2024-11-20 12:37:13.317826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.439 [2024-11-20 12:37:13.317832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.439 [2024-11-20 12:37:13.317842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.439 [2024-11-20 12:37:13.330041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.439 [2024-11-20 12:37:13.330433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.439 [2024-11-20 12:37:13.330450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.439 [2024-11-20 12:37:13.330458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.439 [2024-11-20 12:37:13.330622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.439 [2024-11-20 12:37:13.330785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.439 [2024-11-20 12:37:13.330795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.439 [2024-11-20 12:37:13.330801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.439 [2024-11-20 12:37:13.330808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.439 [2024-11-20 12:37:13.342916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.439 [2024-11-20 12:37:13.343333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.439 [2024-11-20 12:37:13.343371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.439 [2024-11-20 12:37:13.343397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.439 [2024-11-20 12:37:13.343993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.439 [2024-11-20 12:37:13.344322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.439 [2024-11-20 12:37:13.344332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.439 [2024-11-20 12:37:13.344339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.439 [2024-11-20 12:37:13.344346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.439 [2024-11-20 12:37:13.355793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.439 [2024-11-20 12:37:13.356178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.439 [2024-11-20 12:37:13.356224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.439 [2024-11-20 12:37:13.356248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.439 [2024-11-20 12:37:13.356693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.439 [2024-11-20 12:37:13.356858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.439 [2024-11-20 12:37:13.356868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.439 [2024-11-20 12:37:13.356874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.439 [2024-11-20 12:37:13.356880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.439 [2024-11-20 12:37:13.368770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.439 [2024-11-20 12:37:13.369180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.439 [2024-11-20 12:37:13.369227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.439 [2024-11-20 12:37:13.369252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.439 [2024-11-20 12:37:13.369712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.439 [2024-11-20 12:37:13.369878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.439 [2024-11-20 12:37:13.369887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.439 [2024-11-20 12:37:13.369894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.439 [2024-11-20 12:37:13.369900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.439 [2024-11-20 12:37:13.381656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.439 [2024-11-20 12:37:13.382021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.439 [2024-11-20 12:37:13.382040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:30.439 [2024-11-20 12:37:13.382049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:30.439 [2024-11-20 12:37:13.382222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:30.439 [2024-11-20 12:37:13.382396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.439 [2024-11-20 12:37:13.382406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.439 [2024-11-20 12:37:13.382413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.439 [2024-11-20 12:37:13.382419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.439 [2024-11-20 12:37:13.394588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.439 [2024-11-20 12:37:13.394996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.439 [2024-11-20 12:37:13.395014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.439 [2024-11-20 12:37:13.395023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.439 [2024-11-20 12:37:13.395186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.439 [2024-11-20 12:37:13.395349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.439 [2024-11-20 12:37:13.395358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.439 [2024-11-20 12:37:13.395365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.439 [2024-11-20 12:37:13.395372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.439 [2024-11-20 12:37:13.407423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.439 [2024-11-20 12:37:13.407822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.439 [2024-11-20 12:37:13.407839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.439 [2024-11-20 12:37:13.407847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.439 [2024-11-20 12:37:13.408036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.439 [2024-11-20 12:37:13.408210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.439 [2024-11-20 12:37:13.408220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.439 [2024-11-20 12:37:13.408226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.439 [2024-11-20 12:37:13.408234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.439 [2024-11-20 12:37:13.420327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.439 [2024-11-20 12:37:13.420743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.439 [2024-11-20 12:37:13.420762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.439 [2024-11-20 12:37:13.420770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.439 [2024-11-20 12:37:13.420934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.439 [2024-11-20 12:37:13.421129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.439 [2024-11-20 12:37:13.421139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.439 [2024-11-20 12:37:13.421145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.439 [2024-11-20 12:37:13.421152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.439 [2024-11-20 12:37:13.433213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.439 [2024-11-20 12:37:13.433631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.440 [2024-11-20 12:37:13.433649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.440 [2024-11-20 12:37:13.433657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.440 [2024-11-20 12:37:13.433830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.440 [2024-11-20 12:37:13.434012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.440 [2024-11-20 12:37:13.434023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.440 [2024-11-20 12:37:13.434030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.440 [2024-11-20 12:37:13.434037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.440 [2024-11-20 12:37:13.446316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.440 [2024-11-20 12:37:13.446729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.440 [2024-11-20 12:37:13.446774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.440 [2024-11-20 12:37:13.446799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.440 [2024-11-20 12:37:13.447401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.440 [2024-11-20 12:37:13.447576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.440 [2024-11-20 12:37:13.447588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.440 [2024-11-20 12:37:13.447596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.440 [2024-11-20 12:37:13.447604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.440 [2024-11-20 12:37:13.459240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.440 [2024-11-20 12:37:13.459638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.440 [2024-11-20 12:37:13.459655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.440 [2024-11-20 12:37:13.459663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.440 [2024-11-20 12:37:13.459827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.440 [2024-11-20 12:37:13.459995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.440 [2024-11-20 12:37:13.460005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.440 [2024-11-20 12:37:13.460012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.440 [2024-11-20 12:37:13.460019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.440 [2024-11-20 12:37:13.472232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.440 [2024-11-20 12:37:13.472653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.440 [2024-11-20 12:37:13.472698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.440 [2024-11-20 12:37:13.472722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.440 [2024-11-20 12:37:13.473304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.440 [2024-11-20 12:37:13.473694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.440 [2024-11-20 12:37:13.473713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.440 [2024-11-20 12:37:13.473728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.440 [2024-11-20 12:37:13.473743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.440 [2024-11-20 12:37:13.487225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.440 [2024-11-20 12:37:13.487749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.440 [2024-11-20 12:37:13.487797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.440 [2024-11-20 12:37:13.487821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.440 [2024-11-20 12:37:13.488416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.440 [2024-11-20 12:37:13.488941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.440 [2024-11-20 12:37:13.488959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.440 [2024-11-20 12:37:13.488970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.440 [2024-11-20 12:37:13.488984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.440 [2024-11-20 12:37:13.500205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.440 [2024-11-20 12:37:13.500633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.440 [2024-11-20 12:37:13.500650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.440 [2024-11-20 12:37:13.500659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.440 [2024-11-20 12:37:13.500827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.440 [2024-11-20 12:37:13.501018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.440 [2024-11-20 12:37:13.501029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.440 [2024-11-20 12:37:13.501036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.440 [2024-11-20 12:37:13.501043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.440 [2024-11-20 12:37:13.512991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.440 [2024-11-20 12:37:13.513383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.440 [2024-11-20 12:37:13.513400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.440 [2024-11-20 12:37:13.513408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.440 [2024-11-20 12:37:13.513571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.440 [2024-11-20 12:37:13.513734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.440 [2024-11-20 12:37:13.513743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.440 [2024-11-20 12:37:13.513749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.440 [2024-11-20 12:37:13.513756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.440 [2024-11-20 12:37:13.525908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.440 [2024-11-20 12:37:13.526323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.440 [2024-11-20 12:37:13.526341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.440 [2024-11-20 12:37:13.526349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.440 [2024-11-20 12:37:13.526512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.440 [2024-11-20 12:37:13.526675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.440 [2024-11-20 12:37:13.526685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.440 [2024-11-20 12:37:13.526691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.440 [2024-11-20 12:37:13.526697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.440 [2024-11-20 12:37:13.538748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.440 [2024-11-20 12:37:13.539161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.440 [2024-11-20 12:37:13.539182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.440 [2024-11-20 12:37:13.539191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.440 [2024-11-20 12:37:13.539354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.440 [2024-11-20 12:37:13.539517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.440 [2024-11-20 12:37:13.539526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.440 [2024-11-20 12:37:13.539532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.440 [2024-11-20 12:37:13.539539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.440 [2024-11-20 12:37:13.551980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.440 [2024-11-20 12:37:13.552433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.440 [2024-11-20 12:37:13.552453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.440 [2024-11-20 12:37:13.552462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.441 [2024-11-20 12:37:13.552656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.441 [2024-11-20 12:37:13.552837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.441 [2024-11-20 12:37:13.552848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.441 [2024-11-20 12:37:13.552855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.441 [2024-11-20 12:37:13.552862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.700 [2024-11-20 12:37:13.564944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.700 [2024-11-20 12:37:13.565292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.700 [2024-11-20 12:37:13.565309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.700 [2024-11-20 12:37:13.565318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.700 [2024-11-20 12:37:13.565480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.700 [2024-11-20 12:37:13.565643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.700 [2024-11-20 12:37:13.565653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.700 [2024-11-20 12:37:13.565659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.700 [2024-11-20 12:37:13.565665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.700 [2024-11-20 12:37:13.577765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.700 [2024-11-20 12:37:13.578174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.700 [2024-11-20 12:37:13.578221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.700 [2024-11-20 12:37:13.578247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.700 [2024-11-20 12:37:13.578834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.700 [2024-11-20 12:37:13.579052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.700 [2024-11-20 12:37:13.579063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.700 [2024-11-20 12:37:13.579070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.700 [2024-11-20 12:37:13.579078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.700 [2024-11-20 12:37:13.590666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.700 [2024-11-20 12:37:13.591099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.700 [2024-11-20 12:37:13.591145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.700 [2024-11-20 12:37:13.591170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.700 [2024-11-20 12:37:13.591749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.700 [2024-11-20 12:37:13.592257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.700 [2024-11-20 12:37:13.592268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.700 [2024-11-20 12:37:13.592275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.700 [2024-11-20 12:37:13.592282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.700 [2024-11-20 12:37:13.603598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.700 [2024-11-20 12:37:13.604011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.700 [2024-11-20 12:37:13.604029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.700 [2024-11-20 12:37:13.604037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.700 [2024-11-20 12:37:13.604200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.700 [2024-11-20 12:37:13.604364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.700 [2024-11-20 12:37:13.604374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.700 [2024-11-20 12:37:13.604380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.700 [2024-11-20 12:37:13.604387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.701 [2024-11-20 12:37:13.616468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.701 [2024-11-20 12:37:13.616891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.701 [2024-11-20 12:37:13.616937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.701 [2024-11-20 12:37:13.616977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.701 [2024-11-20 12:37:13.617376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.701 [2024-11-20 12:37:13.617551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.701 [2024-11-20 12:37:13.617564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.701 [2024-11-20 12:37:13.617571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.701 [2024-11-20 12:37:13.617578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.701 [2024-11-20 12:37:13.629316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.701 [2024-11-20 12:37:13.629744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.701 [2024-11-20 12:37:13.629789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.701 [2024-11-20 12:37:13.629813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.701 [2024-11-20 12:37:13.630318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.701 [2024-11-20 12:37:13.630491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.701 [2024-11-20 12:37:13.630500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.701 [2024-11-20 12:37:13.630507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.701 [2024-11-20 12:37:13.630513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.701 [2024-11-20 12:37:13.642160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.701 [2024-11-20 12:37:13.642568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.701 [2024-11-20 12:37:13.642586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.701 [2024-11-20 12:37:13.642594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.701 [2024-11-20 12:37:13.642758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.701 [2024-11-20 12:37:13.642921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.701 [2024-11-20 12:37:13.642930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.701 [2024-11-20 12:37:13.642937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.701 [2024-11-20 12:37:13.642943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.701 [2024-11-20 12:37:13.654990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.701 [2024-11-20 12:37:13.655391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.701 [2024-11-20 12:37:13.655436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.701 [2024-11-20 12:37:13.655461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.701 [2024-11-20 12:37:13.656055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.701 [2024-11-20 12:37:13.656479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.701 [2024-11-20 12:37:13.656490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.701 [2024-11-20 12:37:13.656497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.701 [2024-11-20 12:37:13.656503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.701 [2024-11-20 12:37:13.667891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.701 [2024-11-20 12:37:13.668312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.701 [2024-11-20 12:37:13.668330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.701 [2024-11-20 12:37:13.668338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.701 [2024-11-20 12:37:13.668501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.701 [2024-11-20 12:37:13.668665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.701 [2024-11-20 12:37:13.668675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.701 [2024-11-20 12:37:13.668682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.701 [2024-11-20 12:37:13.668689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.701 [2024-11-20 12:37:13.680812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.701 [2024-11-20 12:37:13.681210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.701 [2024-11-20 12:37:13.681228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.701 [2024-11-20 12:37:13.681236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.701 [2024-11-20 12:37:13.681400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.701 [2024-11-20 12:37:13.681563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.701 [2024-11-20 12:37:13.681572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.701 [2024-11-20 12:37:13.681578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.701 [2024-11-20 12:37:13.681585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.701 [2024-11-20 12:37:13.693761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.701 [2024-11-20 12:37:13.694145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.701 [2024-11-20 12:37:13.694164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.701 [2024-11-20 12:37:13.694172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.701 [2024-11-20 12:37:13.694337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.701 [2024-11-20 12:37:13.694501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.701 [2024-11-20 12:37:13.694511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.701 [2024-11-20 12:37:13.694518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.701 [2024-11-20 12:37:13.694526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.701 [2024-11-20 12:37:13.706877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.701 [2024-11-20 12:37:13.707314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.701 [2024-11-20 12:37:13.707337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.701 [2024-11-20 12:37:13.707347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.701 [2024-11-20 12:37:13.707525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.701 [2024-11-20 12:37:13.707706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.701 [2024-11-20 12:37:13.707717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.701 [2024-11-20 12:37:13.707724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.701 [2024-11-20 12:37:13.707731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.701 [2024-11-20 12:37:13.719941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.701 [2024-11-20 12:37:13.720383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.701 [2024-11-20 12:37:13.720402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.701 [2024-11-20 12:37:13.720411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.701 [2024-11-20 12:37:13.720588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.701 [2024-11-20 12:37:13.720767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.701 [2024-11-20 12:37:13.720776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.701 [2024-11-20 12:37:13.720783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.701 [2024-11-20 12:37:13.720791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.701 [2024-11-20 12:37:13.733135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.701 [2024-11-20 12:37:13.733566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.701 [2024-11-20 12:37:13.733584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.701 [2024-11-20 12:37:13.733593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.701 [2024-11-20 12:37:13.733770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.701 [2024-11-20 12:37:13.733956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.701 [2024-11-20 12:37:13.733967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.701 [2024-11-20 12:37:13.733974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.701 [2024-11-20 12:37:13.733981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.702 [2024-11-20 12:37:13.746485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.702 [2024-11-20 12:37:13.746904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.702 [2024-11-20 12:37:13.746922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.702 [2024-11-20 12:37:13.746931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.702 [2024-11-20 12:37:13.747119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.702 [2024-11-20 12:37:13.747299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.702 [2024-11-20 12:37:13.747309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.702 [2024-11-20 12:37:13.747317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.702 [2024-11-20 12:37:13.747323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.702 [2024-11-20 12:37:13.759673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.702 [2024-11-20 12:37:13.760105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.702 [2024-11-20 12:37:13.760124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.702 [2024-11-20 12:37:13.760133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.702 [2024-11-20 12:37:13.760323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.702 [2024-11-20 12:37:13.760506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.702 [2024-11-20 12:37:13.760516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.702 [2024-11-20 12:37:13.760524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.702 [2024-11-20 12:37:13.760531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.702 [2024-11-20 12:37:13.772860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.702 [2024-11-20 12:37:13.773274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.702 [2024-11-20 12:37:13.773292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.702 [2024-11-20 12:37:13.773301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.702 [2024-11-20 12:37:13.773478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.702 [2024-11-20 12:37:13.773658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.702 [2024-11-20 12:37:13.773669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.702 [2024-11-20 12:37:13.773676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.702 [2024-11-20 12:37:13.773683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.702 [2024-11-20 12:37:13.786053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.702 [2024-11-20 12:37:13.786481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.702 [2024-11-20 12:37:13.786500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.702 [2024-11-20 12:37:13.786509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.702 [2024-11-20 12:37:13.786686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.702 [2024-11-20 12:37:13.786865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.702 [2024-11-20 12:37:13.786876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.702 [2024-11-20 12:37:13.786887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.702 [2024-11-20 12:37:13.786894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.702 [2024-11-20 12:37:13.799252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.702 [2024-11-20 12:37:13.799614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.702 [2024-11-20 12:37:13.799632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.702 [2024-11-20 12:37:13.799641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.702 [2024-11-20 12:37:13.799818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.702 [2024-11-20 12:37:13.800005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.702 [2024-11-20 12:37:13.800017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.702 [2024-11-20 12:37:13.800024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.702 [2024-11-20 12:37:13.800031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.702 [2024-11-20 12:37:13.812394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.702 [2024-11-20 12:37:13.812734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.702 [2024-11-20 12:37:13.812754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.702 [2024-11-20 12:37:13.812765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.702 [2024-11-20 12:37:13.812956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.702 [2024-11-20 12:37:13.813144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.702 [2024-11-20 12:37:13.813156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.702 [2024-11-20 12:37:13.813164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.702 [2024-11-20 12:37:13.813172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.960 [2024-11-20 12:37:13.825518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.960 [2024-11-20 12:37:13.825958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.960 [2024-11-20 12:37:13.825977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.960 [2024-11-20 12:37:13.825986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.960 [2024-11-20 12:37:13.826185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.960 [2024-11-20 12:37:13.826371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.960 [2024-11-20 12:37:13.826381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.960 [2024-11-20 12:37:13.826388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.960 [2024-11-20 12:37:13.826395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.960 [2024-11-20 12:37:13.838704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.960 [2024-11-20 12:37:13.839135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.960 [2024-11-20 12:37:13.839154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.960 [2024-11-20 12:37:13.839163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.960 [2024-11-20 12:37:13.839341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.960 [2024-11-20 12:37:13.839540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.960 [2024-11-20 12:37:13.839550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.960 [2024-11-20 12:37:13.839557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.960 [2024-11-20 12:37:13.839564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.960 [2024-11-20 12:37:13.851788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.960 [2024-11-20 12:37:13.852199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.960 [2024-11-20 12:37:13.852218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.960 [2024-11-20 12:37:13.852226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.960 [2024-11-20 12:37:13.852402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.960 [2024-11-20 12:37:13.852582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.960 [2024-11-20 12:37:13.852592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.960 [2024-11-20 12:37:13.852599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.960 [2024-11-20 12:37:13.852607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.960 [2024-11-20 12:37:13.864977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.961 [2024-11-20 12:37:13.865409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.961 [2024-11-20 12:37:13.865427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.961 [2024-11-20 12:37:13.865435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.961 [2024-11-20 12:37:13.865613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.961 [2024-11-20 12:37:13.865793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.961 [2024-11-20 12:37:13.865803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.961 [2024-11-20 12:37:13.865811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.961 [2024-11-20 12:37:13.865818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.961 [2024-11-20 12:37:13.878174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.961 [2024-11-20 12:37:13.878505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.961 [2024-11-20 12:37:13.878526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.961 [2024-11-20 12:37:13.878534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.961 [2024-11-20 12:37:13.878712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.961 [2024-11-20 12:37:13.878891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.961 [2024-11-20 12:37:13.878902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.961 [2024-11-20 12:37:13.878908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.961 [2024-11-20 12:37:13.878916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.961 [2024-11-20 12:37:13.891296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.961 [2024-11-20 12:37:13.891640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.961 [2024-11-20 12:37:13.891658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.961 [2024-11-20 12:37:13.891667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.961 [2024-11-20 12:37:13.891844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.961 [2024-11-20 12:37:13.892031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.961 [2024-11-20 12:37:13.892042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.961 [2024-11-20 12:37:13.892049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.961 [2024-11-20 12:37:13.892056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.961 [2024-11-20 12:37:13.904407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.961 [2024-11-20 12:37:13.904837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.961 [2024-11-20 12:37:13.904854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.961 [2024-11-20 12:37:13.904863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.961 [2024-11-20 12:37:13.905048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.961 [2024-11-20 12:37:13.905232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.961 [2024-11-20 12:37:13.905243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.961 [2024-11-20 12:37:13.905250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.961 [2024-11-20 12:37:13.905257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.961 [2024-11-20 12:37:13.917610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.961 [2024-11-20 12:37:13.918064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.961 [2024-11-20 12:37:13.918082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.961 [2024-11-20 12:37:13.918091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.961 [2024-11-20 12:37:13.918269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.961 [2024-11-20 12:37:13.918452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.961 [2024-11-20 12:37:13.918463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.961 [2024-11-20 12:37:13.918469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.961 [2024-11-20 12:37:13.918476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.961 [2024-11-20 12:37:13.930846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.961 [2024-11-20 12:37:13.931288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.961 [2024-11-20 12:37:13.931306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.961 [2024-11-20 12:37:13.931315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.961 [2024-11-20 12:37:13.931491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.961 [2024-11-20 12:37:13.931670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.961 [2024-11-20 12:37:13.931681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.961 [2024-11-20 12:37:13.931687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.961 [2024-11-20 12:37:13.931695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.961 [2024-11-20 12:37:13.944045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.961 [2024-11-20 12:37:13.944430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.961 [2024-11-20 12:37:13.944448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.961 [2024-11-20 12:37:13.944456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.961 [2024-11-20 12:37:13.944634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.961 [2024-11-20 12:37:13.944813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.961 [2024-11-20 12:37:13.944824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.961 [2024-11-20 12:37:13.944833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.961 [2024-11-20 12:37:13.944841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.961 [2024-11-20 12:37:13.957206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.961 [2024-11-20 12:37:13.957635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.961 [2024-11-20 12:37:13.957653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.961 [2024-11-20 12:37:13.957661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.961 [2024-11-20 12:37:13.957839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.961 [2024-11-20 12:37:13.958026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.961 [2024-11-20 12:37:13.958037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.961 [2024-11-20 12:37:13.958048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.961 [2024-11-20 12:37:13.958056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.961 [2024-11-20 12:37:13.970392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.961 [2024-11-20 12:37:13.970821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.961 [2024-11-20 12:37:13.970839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.961 [2024-11-20 12:37:13.970848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.961 [2024-11-20 12:37:13.971032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.961 [2024-11-20 12:37:13.971212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.961 [2024-11-20 12:37:13.971221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.961 [2024-11-20 12:37:13.971228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.961 [2024-11-20 12:37:13.971235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.961 [2024-11-20 12:37:13.983581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.961 [2024-11-20 12:37:13.984014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.961 [2024-11-20 12:37:13.984033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.961 [2024-11-20 12:37:13.984041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.961 [2024-11-20 12:37:13.984219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.961 [2024-11-20 12:37:13.984399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.961 [2024-11-20 12:37:13.984409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.961 [2024-11-20 12:37:13.984416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.961 [2024-11-20 12:37:13.984423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.962 [2024-11-20 12:37:13.996774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.962 [2024-11-20 12:37:13.997049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 12:37:13.997068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.962 [2024-11-20 12:37:13.997076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.962 [2024-11-20 12:37:13.997253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.962 [2024-11-20 12:37:13.997432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.962 [2024-11-20 12:37:13.997443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.962 [2024-11-20 12:37:13.997450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.962 [2024-11-20 12:37:13.997456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.962 [2024-11-20 12:37:14.009811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.962 [2024-11-20 12:37:14.010179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 12:37:14.010197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.962 [2024-11-20 12:37:14.010206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.962 [2024-11-20 12:37:14.010383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.962 [2024-11-20 12:37:14.010563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.962 [2024-11-20 12:37:14.010573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.962 [2024-11-20 12:37:14.010580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.962 [2024-11-20 12:37:14.010586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.962 [2024-11-20 12:37:14.023007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.962 [2024-11-20 12:37:14.023444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 12:37:14.023463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.962 [2024-11-20 12:37:14.023471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.962 [2024-11-20 12:37:14.023655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.962 [2024-11-20 12:37:14.023845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.962 [2024-11-20 12:37:14.023855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.962 [2024-11-20 12:37:14.023862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.962 [2024-11-20 12:37:14.023869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.962 [2024-11-20 12:37:14.036138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.962 [2024-11-20 12:37:14.036556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 12:37:14.036574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.962 [2024-11-20 12:37:14.036583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.962 [2024-11-20 12:37:14.036766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.962 [2024-11-20 12:37:14.036957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.962 [2024-11-20 12:37:14.036969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.962 [2024-11-20 12:37:14.036978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.962 [2024-11-20 12:37:14.036986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.962 [2024-11-20 12:37:14.049374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.962 [2024-11-20 12:37:14.049724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 12:37:14.049742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.962 [2024-11-20 12:37:14.049754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.962 [2024-11-20 12:37:14.049932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.962 [2024-11-20 12:37:14.050119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.962 [2024-11-20 12:37:14.050130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.962 [2024-11-20 12:37:14.050137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.962 [2024-11-20 12:37:14.050144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.962 [2024-11-20 12:37:14.062568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.962 [2024-11-20 12:37:14.062896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.962 [2024-11-20 12:37:14.062914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:30.962 [2024-11-20 12:37:14.062922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:30.962 [2024-11-20 12:37:14.063114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:30.962 [2024-11-20 12:37:14.063304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.962 [2024-11-20 12:37:14.063315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.962 [2024-11-20 12:37:14.063322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.962 [2024-11-20 12:37:14.063328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.221 [2024-11-20 12:37:14.075859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.221 [2024-11-20 12:37:14.076311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.221 [2024-11-20 12:37:14.076330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.221 [2024-11-20 12:37:14.076339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.221 [2024-11-20 12:37:14.076523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.221 [2024-11-20 12:37:14.076708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.222 [2024-11-20 12:37:14.076718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.222 [2024-11-20 12:37:14.076725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.222 [2024-11-20 12:37:14.076732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.222 [2024-11-20 12:37:14.088938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.222 [2024-11-20 12:37:14.089308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.222 [2024-11-20 12:37:14.089326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.222 [2024-11-20 12:37:14.089335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.222 [2024-11-20 12:37:14.089514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.222 [2024-11-20 12:37:14.089697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.222 [2024-11-20 12:37:14.089707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.222 [2024-11-20 12:37:14.089714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.222 [2024-11-20 12:37:14.089721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.222 7072.00 IOPS, 27.62 MiB/s [2024-11-20T11:37:14.338Z] [2024-11-20 12:37:14.102054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.222 [2024-11-20 12:37:14.102483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.222 [2024-11-20 12:37:14.102501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.222 [2024-11-20 12:37:14.102510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.222 [2024-11-20 12:37:14.102688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.222 [2024-11-20 12:37:14.102867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.222 [2024-11-20 12:37:14.102877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.222 [2024-11-20 12:37:14.102884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.222 [2024-11-20 12:37:14.102892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.222 [2024-11-20 12:37:14.115242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.222 [2024-11-20 12:37:14.115680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.222 [2024-11-20 12:37:14.115699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.222 [2024-11-20 12:37:14.115707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.222 [2024-11-20 12:37:14.115885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.222 [2024-11-20 12:37:14.116072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.222 [2024-11-20 12:37:14.116083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.222 [2024-11-20 12:37:14.116090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.222 [2024-11-20 12:37:14.116097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.222 [2024-11-20 12:37:14.128434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.222 [2024-11-20 12:37:14.128879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.222 [2024-11-20 12:37:14.128924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.222 [2024-11-20 12:37:14.128959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.222 [2024-11-20 12:37:14.129482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.222 [2024-11-20 12:37:14.129660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.222 [2024-11-20 12:37:14.129670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.222 [2024-11-20 12:37:14.129681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.222 [2024-11-20 12:37:14.129689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.222 [2024-11-20 12:37:14.141496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.222 [2024-11-20 12:37:14.141847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.222 [2024-11-20 12:37:14.141864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.222 [2024-11-20 12:37:14.141872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.222 [2024-11-20 12:37:14.142050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.222 [2024-11-20 12:37:14.142224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.222 [2024-11-20 12:37:14.142234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.222 [2024-11-20 12:37:14.142240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.222 [2024-11-20 12:37:14.142247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.222 [2024-11-20 12:37:14.154533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.222 [2024-11-20 12:37:14.154945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.222 [2024-11-20 12:37:14.155001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.222 [2024-11-20 12:37:14.155026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.222 [2024-11-20 12:37:14.155564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.222 [2024-11-20 12:37:14.155738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.222 [2024-11-20 12:37:14.155748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.222 [2024-11-20 12:37:14.155755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.222 [2024-11-20 12:37:14.155761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.222 [2024-11-20 12:37:14.167550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.222 [2024-11-20 12:37:14.167965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.222 [2024-11-20 12:37:14.167983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.222 [2024-11-20 12:37:14.167991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.222 [2024-11-20 12:37:14.168154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.222 [2024-11-20 12:37:14.168318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.222 [2024-11-20 12:37:14.168328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.222 [2024-11-20 12:37:14.168334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.222 [2024-11-20 12:37:14.168340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.222 [2024-11-20 12:37:14.180513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.222 [2024-11-20 12:37:14.180925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.222 [2024-11-20 12:37:14.180983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.222 [2024-11-20 12:37:14.181008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.222 [2024-11-20 12:37:14.181487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.222 [2024-11-20 12:37:14.181651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.222 [2024-11-20 12:37:14.181660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.222 [2024-11-20 12:37:14.181666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.222 [2024-11-20 12:37:14.181673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.222 [2024-11-20 12:37:14.193437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.222 [2024-11-20 12:37:14.193849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.222 [2024-11-20 12:37:14.193889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.222 [2024-11-20 12:37:14.193915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.222 [2024-11-20 12:37:14.194451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.222 [2024-11-20 12:37:14.194622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.222 [2024-11-20 12:37:14.194632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.222 [2024-11-20 12:37:14.194639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.222 [2024-11-20 12:37:14.194645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.222 [2024-11-20 12:37:14.206546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.222 [2024-11-20 12:37:14.206976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.222 [2024-11-20 12:37:14.207025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.223 [2024-11-20 12:37:14.207052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.223 [2024-11-20 12:37:14.207499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.223 [2024-11-20 12:37:14.207678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.223 [2024-11-20 12:37:14.207689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.223 [2024-11-20 12:37:14.207697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.223 [2024-11-20 12:37:14.207704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.223 [2024-11-20 12:37:14.219434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.223 [2024-11-20 12:37:14.219788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.223 [2024-11-20 12:37:14.219832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.223 [2024-11-20 12:37:14.219865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.223 [2024-11-20 12:37:14.220378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.223 [2024-11-20 12:37:14.220543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.223 [2024-11-20 12:37:14.220552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.223 [2024-11-20 12:37:14.220559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.223 [2024-11-20 12:37:14.220566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.223 [2024-11-20 12:37:14.232318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.223 [2024-11-20 12:37:14.232668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.223 [2024-11-20 12:37:14.232714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.223 [2024-11-20 12:37:14.232739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.223 [2024-11-20 12:37:14.233334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.223 [2024-11-20 12:37:14.233579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.223 [2024-11-20 12:37:14.233589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.223 [2024-11-20 12:37:14.233595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.223 [2024-11-20 12:37:14.233601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.223 [2024-11-20 12:37:14.245194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.223 [2024-11-20 12:37:14.245612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.223 [2024-11-20 12:37:14.245628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.223 [2024-11-20 12:37:14.245636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.223 [2024-11-20 12:37:14.245798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.223 [2024-11-20 12:37:14.245966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.223 [2024-11-20 12:37:14.245974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.223 [2024-11-20 12:37:14.245981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.223 [2024-11-20 12:37:14.245987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.223 [2024-11-20 12:37:14.258114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.223 [2024-11-20 12:37:14.258527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.223 [2024-11-20 12:37:14.258545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.223 [2024-11-20 12:37:14.258552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.223 [2024-11-20 12:37:14.258715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.223 [2024-11-20 12:37:14.258883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.223 [2024-11-20 12:37:14.258893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.223 [2024-11-20 12:37:14.258900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.223 [2024-11-20 12:37:14.258906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.223 [2024-11-20 12:37:14.270924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.223 [2024-11-20 12:37:14.271316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.223 [2024-11-20 12:37:14.271335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.223 [2024-11-20 12:37:14.271344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.223 [2024-11-20 12:37:14.271508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.223 [2024-11-20 12:37:14.271673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.223 [2024-11-20 12:37:14.271683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.223 [2024-11-20 12:37:14.271689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.223 [2024-11-20 12:37:14.271697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.223 [2024-11-20 12:37:14.283722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.223 [2024-11-20 12:37:14.284051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.223 [2024-11-20 12:37:14.284069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.223 [2024-11-20 12:37:14.284077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.223 [2024-11-20 12:37:14.284240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.223 [2024-11-20 12:37:14.284405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.223 [2024-11-20 12:37:14.284414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.223 [2024-11-20 12:37:14.284421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.223 [2024-11-20 12:37:14.284427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.223 [2024-11-20 12:37:14.296662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.223 [2024-11-20 12:37:14.297056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.223 [2024-11-20 12:37:14.297074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.223 [2024-11-20 12:37:14.297083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.223 [2024-11-20 12:37:14.297246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.223 [2024-11-20 12:37:14.297410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.223 [2024-11-20 12:37:14.297420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.223 [2024-11-20 12:37:14.297430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.223 [2024-11-20 12:37:14.297436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.223 [2024-11-20 12:37:14.309508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.223 [2024-11-20 12:37:14.309903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.223 [2024-11-20 12:37:14.309920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.223 [2024-11-20 12:37:14.309928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.223 [2024-11-20 12:37:14.310119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.223 [2024-11-20 12:37:14.310292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.223 [2024-11-20 12:37:14.310302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.223 [2024-11-20 12:37:14.310309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.223 [2024-11-20 12:37:14.310316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.223 [2024-11-20 12:37:14.322298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.223 [2024-11-20 12:37:14.322710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.223 [2024-11-20 12:37:14.322727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.223 [2024-11-20 12:37:14.322735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.223 [2024-11-20 12:37:14.322897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.223 [2024-11-20 12:37:14.323067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.223 [2024-11-20 12:37:14.323077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.223 [2024-11-20 12:37:14.323084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.223 [2024-11-20 12:37:14.323090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.223 [2024-11-20 12:37:14.335522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.223 [2024-11-20 12:37:14.335977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.223 [2024-11-20 12:37:14.335996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.224 [2024-11-20 12:37:14.336006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.224 [2024-11-20 12:37:14.336181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.224 [2024-11-20 12:37:14.336356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.224 [2024-11-20 12:37:14.336366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.224 [2024-11-20 12:37:14.336373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.224 [2024-11-20 12:37:14.336380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.483 [2024-11-20 12:37:14.348317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.483 [2024-11-20 12:37:14.348746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.483 [2024-11-20 12:37:14.348793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.483 [2024-11-20 12:37:14.348819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.483 [2024-11-20 12:37:14.349418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.483 [2024-11-20 12:37:14.349989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.483 [2024-11-20 12:37:14.350000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.483 [2024-11-20 12:37:14.350007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.483 [2024-11-20 12:37:14.350015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.483 [2024-11-20 12:37:14.361225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.483 [2024-11-20 12:37:14.361575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.483 [2024-11-20 12:37:14.361593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.483 [2024-11-20 12:37:14.361601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.483 [2024-11-20 12:37:14.361764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.483 [2024-11-20 12:37:14.361927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.483 [2024-11-20 12:37:14.361936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.483 [2024-11-20 12:37:14.361943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.483 [2024-11-20 12:37:14.361954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.483 [2024-11-20 12:37:14.374096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.483 [2024-11-20 12:37:14.374439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.483 [2024-11-20 12:37:14.374457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.483 [2024-11-20 12:37:14.374467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.483 [2024-11-20 12:37:14.374630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.483 [2024-11-20 12:37:14.374794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.483 [2024-11-20 12:37:14.374803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.483 [2024-11-20 12:37:14.374810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.483 [2024-11-20 12:37:14.374816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.483 [2024-11-20 12:37:14.386978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.483 [2024-11-20 12:37:14.387371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.483 [2024-11-20 12:37:14.387388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.483 [2024-11-20 12:37:14.387400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.483 [2024-11-20 12:37:14.387564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.483 [2024-11-20 12:37:14.387729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.483 [2024-11-20 12:37:14.387738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.483 [2024-11-20 12:37:14.387744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.483 [2024-11-20 12:37:14.387751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.484 [2024-11-20 12:37:14.399826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.484 [2024-11-20 12:37:14.400259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.484 [2024-11-20 12:37:14.400307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.484 [2024-11-20 12:37:14.400331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.484 [2024-11-20 12:37:14.400893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.484 [2024-11-20 12:37:14.401085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.484 [2024-11-20 12:37:14.401095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.484 [2024-11-20 12:37:14.401102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.484 [2024-11-20 12:37:14.401109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.484 [2024-11-20 12:37:14.412629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.484 [2024-11-20 12:37:14.413044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.484 [2024-11-20 12:37:14.413086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.484 [2024-11-20 12:37:14.413112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.484 [2024-11-20 12:37:14.413693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.484 [2024-11-20 12:37:14.414195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.484 [2024-11-20 12:37:14.414214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.484 [2024-11-20 12:37:14.414222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.484 [2024-11-20 12:37:14.414229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.484 [2024-11-20 12:37:14.425536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.484 [2024-11-20 12:37:14.425902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.484 [2024-11-20 12:37:14.425960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.484 [2024-11-20 12:37:14.425985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.484 [2024-11-20 12:37:14.426565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.484 [2024-11-20 12:37:14.427132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.484 [2024-11-20 12:37:14.427143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.484 [2024-11-20 12:37:14.427150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.484 [2024-11-20 12:37:14.427157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.484 [2024-11-20 12:37:14.438371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.484 [2024-11-20 12:37:14.438689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.484 [2024-11-20 12:37:14.438706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.484 [2024-11-20 12:37:14.438714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.484 [2024-11-20 12:37:14.438878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.484 [2024-11-20 12:37:14.439045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.484 [2024-11-20 12:37:14.439056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.484 [2024-11-20 12:37:14.439062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.484 [2024-11-20 12:37:14.439069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.484 [2024-11-20 12:37:14.451290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.484 [2024-11-20 12:37:14.451732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.484 [2024-11-20 12:37:14.451776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.484 [2024-11-20 12:37:14.451801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.484 [2024-11-20 12:37:14.452285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.484 [2024-11-20 12:37:14.452450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.484 [2024-11-20 12:37:14.452459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.484 [2024-11-20 12:37:14.452467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.484 [2024-11-20 12:37:14.452475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.484 [2024-11-20 12:37:14.464489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.484 [2024-11-20 12:37:14.464921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.484 [2024-11-20 12:37:14.464981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.484 [2024-11-20 12:37:14.465008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.484 [2024-11-20 12:37:14.465587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.484 [2024-11-20 12:37:14.466180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.484 [2024-11-20 12:37:14.466216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.484 [2024-11-20 12:37:14.466230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.484 [2024-11-20 12:37:14.466238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.484 [2024-11-20 12:37:14.477401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.484 [2024-11-20 12:37:14.477806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.484 [2024-11-20 12:37:14.477850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.484 [2024-11-20 12:37:14.477875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.484 [2024-11-20 12:37:14.478315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.484 [2024-11-20 12:37:14.478482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.484 [2024-11-20 12:37:14.478492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.484 [2024-11-20 12:37:14.478499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.484 [2024-11-20 12:37:14.478505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.484 [2024-11-20 12:37:14.490209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.484 [2024-11-20 12:37:14.490547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.484 [2024-11-20 12:37:14.490564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.484 [2024-11-20 12:37:14.490572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.484 [2024-11-20 12:37:14.490735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.484 [2024-11-20 12:37:14.490899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.484 [2024-11-20 12:37:14.490909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.484 [2024-11-20 12:37:14.490915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.484 [2024-11-20 12:37:14.490922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.484 [2024-11-20 12:37:14.503224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.484 [2024-11-20 12:37:14.503582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.484 [2024-11-20 12:37:14.503600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.484 [2024-11-20 12:37:14.503608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.484 [2024-11-20 12:37:14.503786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.484 [2024-11-20 12:37:14.503958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.484 [2024-11-20 12:37:14.503968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.484 [2024-11-20 12:37:14.503975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.484 [2024-11-20 12:37:14.503982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.484 [2024-11-20 12:37:14.516048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.484 [2024-11-20 12:37:14.516480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.484 [2024-11-20 12:37:14.516524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.484 [2024-11-20 12:37:14.516549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.485 [2024-11-20 12:37:14.517142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.485 [2024-11-20 12:37:14.517497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.485 [2024-11-20 12:37:14.517507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.485 [2024-11-20 12:37:14.517513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.485 [2024-11-20 12:37:14.517520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.485 [2024-11-20 12:37:14.528999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.485 [2024-11-20 12:37:14.529419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.485 [2024-11-20 12:37:14.529465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.485 [2024-11-20 12:37:14.529490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.485 [2024-11-20 12:37:14.530082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.485 [2024-11-20 12:37:14.530275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.485 [2024-11-20 12:37:14.530285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.485 [2024-11-20 12:37:14.530292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.485 [2024-11-20 12:37:14.530298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.485 [2024-11-20 12:37:14.541908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.485 [2024-11-20 12:37:14.542331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.485 [2024-11-20 12:37:14.542349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.485 [2024-11-20 12:37:14.542357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.485 [2024-11-20 12:37:14.542520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.485 [2024-11-20 12:37:14.542685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.485 [2024-11-20 12:37:14.542695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.485 [2024-11-20 12:37:14.542701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.485 [2024-11-20 12:37:14.542708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.485 [2024-11-20 12:37:14.554775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.485 [2024-11-20 12:37:14.555192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.485 [2024-11-20 12:37:14.555209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.485 [2024-11-20 12:37:14.555220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.485 [2024-11-20 12:37:14.555383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.485 [2024-11-20 12:37:14.555547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.485 [2024-11-20 12:37:14.555557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.485 [2024-11-20 12:37:14.555563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.485 [2024-11-20 12:37:14.555570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.485 [2024-11-20 12:37:14.567577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.485 [2024-11-20 12:37:14.567989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.485 [2024-11-20 12:37:14.568007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.485 [2024-11-20 12:37:14.568015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.485 [2024-11-20 12:37:14.568178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.485 [2024-11-20 12:37:14.568341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.485 [2024-11-20 12:37:14.568351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.485 [2024-11-20 12:37:14.568357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.485 [2024-11-20 12:37:14.568364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.485 [2024-11-20 12:37:14.580513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.485 [2024-11-20 12:37:14.580929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.485 [2024-11-20 12:37:14.580946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.485 [2024-11-20 12:37:14.580959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.485 [2024-11-20 12:37:14.581122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.485 [2024-11-20 12:37:14.581285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.485 [2024-11-20 12:37:14.581295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.485 [2024-11-20 12:37:14.581301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.485 [2024-11-20 12:37:14.581308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.485 [2024-11-20 12:37:14.593403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.485 [2024-11-20 12:37:14.593739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.485 [2024-11-20 12:37:14.593757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.485 [2024-11-20 12:37:14.593765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.485 [2024-11-20 12:37:14.593938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.485 [2024-11-20 12:37:14.594142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.485 [2024-11-20 12:37:14.594161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.485 [2024-11-20 12:37:14.594170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.485 [2024-11-20 12:37:14.594179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.745 [2024-11-20 12:37:14.606335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.745 [2024-11-20 12:37:14.606762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.745 [2024-11-20 12:37:14.606779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.745 [2024-11-20 12:37:14.606788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.745 [2024-11-20 12:37:14.606985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.745 [2024-11-20 12:37:14.607151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.745 [2024-11-20 12:37:14.607162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.745 [2024-11-20 12:37:14.607169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.745 [2024-11-20 12:37:14.607175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.745 [2024-11-20 12:37:14.619240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.745 [2024-11-20 12:37:14.619656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.745 [2024-11-20 12:37:14.619703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.745 [2024-11-20 12:37:14.619728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.745 [2024-11-20 12:37:14.620323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.745 [2024-11-20 12:37:14.620786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.745 [2024-11-20 12:37:14.620796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.745 [2024-11-20 12:37:14.620802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.745 [2024-11-20 12:37:14.620809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.745 [2024-11-20 12:37:14.632054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:31.745 [2024-11-20 12:37:14.632470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:31.745 [2024-11-20 12:37:14.632517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:31.745 [2024-11-20 12:37:14.632542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:31.745 [2024-11-20 12:37:14.633135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:31.745 [2024-11-20 12:37:14.633680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:31.745 [2024-11-20 12:37:14.633689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:31.745 [2024-11-20 12:37:14.633695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:31.745 [2024-11-20 12:37:14.633706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:31.745 [2024-11-20 12:37:14.644852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.745 [2024-11-20 12:37:14.645192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.745 [2024-11-20 12:37:14.645210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.745 [2024-11-20 12:37:14.645218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.745 [2024-11-20 12:37:14.645382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.745 [2024-11-20 12:37:14.645547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.745 [2024-11-20 12:37:14.645556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.745 [2024-11-20 12:37:14.645563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.745 [2024-11-20 12:37:14.645569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.745 [2024-11-20 12:37:14.657635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.745 [2024-11-20 12:37:14.657985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.745 [2024-11-20 12:37:14.658003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.745 [2024-11-20 12:37:14.658011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.745 [2024-11-20 12:37:14.658174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.746 [2024-11-20 12:37:14.658339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.746 [2024-11-20 12:37:14.658349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.746 [2024-11-20 12:37:14.658355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.746 [2024-11-20 12:37:14.658361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.746 [2024-11-20 12:37:14.670433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.746 [2024-11-20 12:37:14.670801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.746 [2024-11-20 12:37:14.670844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.746 [2024-11-20 12:37:14.670869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.746 [2024-11-20 12:37:14.671463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.746 [2024-11-20 12:37:14.671641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.746 [2024-11-20 12:37:14.671651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.746 [2024-11-20 12:37:14.671658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.746 [2024-11-20 12:37:14.671665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.746 [2024-11-20 12:37:14.683361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.746 [2024-11-20 12:37:14.683796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.746 [2024-11-20 12:37:14.683839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.746 [2024-11-20 12:37:14.683862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.746 [2024-11-20 12:37:14.684460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.746 [2024-11-20 12:37:14.684626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.746 [2024-11-20 12:37:14.684635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.746 [2024-11-20 12:37:14.684642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.746 [2024-11-20 12:37:14.684648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.746 [2024-11-20 12:37:14.696259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.746 [2024-11-20 12:37:14.696683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.746 [2024-11-20 12:37:14.696727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.746 [2024-11-20 12:37:14.696752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.746 [2024-11-20 12:37:14.697349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.746 [2024-11-20 12:37:14.697803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.746 [2024-11-20 12:37:14.697812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.746 [2024-11-20 12:37:14.697819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.746 [2024-11-20 12:37:14.697825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.746 [2024-11-20 12:37:14.709234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.746 [2024-11-20 12:37:14.709602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.746 [2024-11-20 12:37:14.709648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.746 [2024-11-20 12:37:14.709672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.746 [2024-11-20 12:37:14.710267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.746 [2024-11-20 12:37:14.710831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.746 [2024-11-20 12:37:14.710841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.746 [2024-11-20 12:37:14.710849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.746 [2024-11-20 12:37:14.710855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.746 [2024-11-20 12:37:14.722283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.746 [2024-11-20 12:37:14.722705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.746 [2024-11-20 12:37:14.722723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.746 [2024-11-20 12:37:14.722732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.746 [2024-11-20 12:37:14.722908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.746 [2024-11-20 12:37:14.723086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.746 [2024-11-20 12:37:14.723097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.746 [2024-11-20 12:37:14.723104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.746 [2024-11-20 12:37:14.723111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.746 [2024-11-20 12:37:14.735431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.746 [2024-11-20 12:37:14.735863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.746 [2024-11-20 12:37:14.735881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.746 [2024-11-20 12:37:14.735889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.746 [2024-11-20 12:37:14.736065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.746 [2024-11-20 12:37:14.736239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.746 [2024-11-20 12:37:14.736249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.746 [2024-11-20 12:37:14.736256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.746 [2024-11-20 12:37:14.736262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.746 [2024-11-20 12:37:14.748303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.746 [2024-11-20 12:37:14.748653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.746 [2024-11-20 12:37:14.748670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.746 [2024-11-20 12:37:14.748679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.746 [2024-11-20 12:37:14.748842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.746 [2024-11-20 12:37:14.749011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.746 [2024-11-20 12:37:14.749021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.746 [2024-11-20 12:37:14.749028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.746 [2024-11-20 12:37:14.749035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.746 [2024-11-20 12:37:14.761094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.746 [2024-11-20 12:37:14.761507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.746 [2024-11-20 12:37:14.761524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.746 [2024-11-20 12:37:14.761532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.746 [2024-11-20 12:37:14.761695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.746 [2024-11-20 12:37:14.761859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.746 [2024-11-20 12:37:14.761872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.746 [2024-11-20 12:37:14.761879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.746 [2024-11-20 12:37:14.761886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.746 [2024-11-20 12:37:14.774027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.746 [2024-11-20 12:37:14.774427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.746 [2024-11-20 12:37:14.774472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.746 [2024-11-20 12:37:14.774497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.746 [2024-11-20 12:37:14.774960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.746 [2024-11-20 12:37:14.775126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.746 [2024-11-20 12:37:14.775136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.746 [2024-11-20 12:37:14.775142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.746 [2024-11-20 12:37:14.775148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.746 [2024-11-20 12:37:14.786847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.746 [2024-11-20 12:37:14.787276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.746 [2024-11-20 12:37:14.787321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.746 [2024-11-20 12:37:14.787345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.747 [2024-11-20 12:37:14.787825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.747 [2024-11-20 12:37:14.787995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.747 [2024-11-20 12:37:14.788005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.747 [2024-11-20 12:37:14.788012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.747 [2024-11-20 12:37:14.788018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.747 [2024-11-20 12:37:14.799639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.747 [2024-11-20 12:37:14.800060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.747 [2024-11-20 12:37:14.800107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.747 [2024-11-20 12:37:14.800132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.747 [2024-11-20 12:37:14.800712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.747 [2024-11-20 12:37:14.800938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.747 [2024-11-20 12:37:14.800953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.747 [2024-11-20 12:37:14.800961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.747 [2024-11-20 12:37:14.800971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.747 [2024-11-20 12:37:14.812578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.747 [2024-11-20 12:37:14.812994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.747 [2024-11-20 12:37:14.813037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.747 [2024-11-20 12:37:14.813063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.747 [2024-11-20 12:37:14.813643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.747 [2024-11-20 12:37:14.814238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.747 [2024-11-20 12:37:14.814263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.747 [2024-11-20 12:37:14.814270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.747 [2024-11-20 12:37:14.814277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.747 [2024-11-20 12:37:14.825424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.747 [2024-11-20 12:37:14.825815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.747 [2024-11-20 12:37:14.825831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.747 [2024-11-20 12:37:14.825840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.747 [2024-11-20 12:37:14.826011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.747 [2024-11-20 12:37:14.826175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.747 [2024-11-20 12:37:14.826185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.747 [2024-11-20 12:37:14.826191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.747 [2024-11-20 12:37:14.826197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.747 [2024-11-20 12:37:14.838209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.747 [2024-11-20 12:37:14.838624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.747 [2024-11-20 12:37:14.838641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.747 [2024-11-20 12:37:14.838649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.747 [2024-11-20 12:37:14.838813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.747 [2024-11-20 12:37:14.838999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.747 [2024-11-20 12:37:14.839009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.747 [2024-11-20 12:37:14.839016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.747 [2024-11-20 12:37:14.839023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.747 [2024-11-20 12:37:14.851191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.747 [2024-11-20 12:37:14.851625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.747 [2024-11-20 12:37:14.851670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:31.747 [2024-11-20 12:37:14.851695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:31.747 [2024-11-20 12:37:14.852206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:31.747 [2024-11-20 12:37:14.852372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.747 [2024-11-20 12:37:14.852381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.747 [2024-11-20 12:37:14.852388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.747 [2024-11-20 12:37:14.852394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.007 [2024-11-20 12:37:14.864194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.007 [2024-11-20 12:37:14.864625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.007 [2024-11-20 12:37:14.864642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.007 [2024-11-20 12:37:14.864650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.007 [2024-11-20 12:37:14.864824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.007 [2024-11-20 12:37:14.865014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.007 [2024-11-20 12:37:14.865026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.007 [2024-11-20 12:37:14.865033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.007 [2024-11-20 12:37:14.865041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.007 [2024-11-20 12:37:14.877138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.007 [2024-11-20 12:37:14.877486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.007 [2024-11-20 12:37:14.877503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.007 [2024-11-20 12:37:14.877511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.007 [2024-11-20 12:37:14.877675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.007 [2024-11-20 12:37:14.877839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.007 [2024-11-20 12:37:14.877848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.007 [2024-11-20 12:37:14.877854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.007 [2024-11-20 12:37:14.877861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.007 [2024-11-20 12:37:14.890028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.007 [2024-11-20 12:37:14.890386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.007 [2024-11-20 12:37:14.890431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.007 [2024-11-20 12:37:14.890456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.007 [2024-11-20 12:37:14.890988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.007 [2024-11-20 12:37:14.891153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.007 [2024-11-20 12:37:14.891163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.007 [2024-11-20 12:37:14.891169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.007 [2024-11-20 12:37:14.891175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.007 [2024-11-20 12:37:14.903022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.007 [2024-11-20 12:37:14.903461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.007 [2024-11-20 12:37:14.903508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.007 [2024-11-20 12:37:14.903532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.007 [2024-11-20 12:37:14.904125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.007 [2024-11-20 12:37:14.904518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.007 [2024-11-20 12:37:14.904527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.007 [2024-11-20 12:37:14.904533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.007 [2024-11-20 12:37:14.904539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.007 [2024-11-20 12:37:14.915848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.007 [2024-11-20 12:37:14.916276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.007 [2024-11-20 12:37:14.916294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.007 [2024-11-20 12:37:14.916302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.007 [2024-11-20 12:37:14.916466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.007 [2024-11-20 12:37:14.916629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.008 [2024-11-20 12:37:14.916638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.008 [2024-11-20 12:37:14.916645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.008 [2024-11-20 12:37:14.916651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.008 [2024-11-20 12:37:14.928769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.008 [2024-11-20 12:37:14.929202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.008 [2024-11-20 12:37:14.929219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.008 [2024-11-20 12:37:14.929228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.008 [2024-11-20 12:37:14.929400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.008 [2024-11-20 12:37:14.929576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.008 [2024-11-20 12:37:14.929589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.008 [2024-11-20 12:37:14.929595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.008 [2024-11-20 12:37:14.929602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.008 [2024-11-20 12:37:14.941606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.008 [2024-11-20 12:37:14.942031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.008 [2024-11-20 12:37:14.942049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.008 [2024-11-20 12:37:14.942057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.008 [2024-11-20 12:37:14.942221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.008 [2024-11-20 12:37:14.942384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.008 [2024-11-20 12:37:14.942394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.008 [2024-11-20 12:37:14.942401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.008 [2024-11-20 12:37:14.942407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.008 [2024-11-20 12:37:14.954476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.008 [2024-11-20 12:37:14.954889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.008 [2024-11-20 12:37:14.954906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.008 [2024-11-20 12:37:14.954914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.008 [2024-11-20 12:37:14.955084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.008 [2024-11-20 12:37:14.955249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.008 [2024-11-20 12:37:14.955258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.008 [2024-11-20 12:37:14.955264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.008 [2024-11-20 12:37:14.955271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.008 [2024-11-20 12:37:14.967333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.008 [2024-11-20 12:37:14.967786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.008 [2024-11-20 12:37:14.967830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.008 [2024-11-20 12:37:14.967856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.008 [2024-11-20 12:37:14.968367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.008 [2024-11-20 12:37:14.968532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.008 [2024-11-20 12:37:14.968541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.008 [2024-11-20 12:37:14.968547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.008 [2024-11-20 12:37:14.968556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.008 [2024-11-20 12:37:14.980532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.008 [2024-11-20 12:37:14.980984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.008 [2024-11-20 12:37:14.981031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.008 [2024-11-20 12:37:14.981055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.008 [2024-11-20 12:37:14.981330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.008 [2024-11-20 12:37:14.981505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.008 [2024-11-20 12:37:14.981516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.008 [2024-11-20 12:37:14.981522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.008 [2024-11-20 12:37:14.981529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.008 [2024-11-20 12:37:14.993419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.008 [2024-11-20 12:37:14.993839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.008 [2024-11-20 12:37:14.993856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.008 [2024-11-20 12:37:14.993865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.008 [2024-11-20 12:37:14.994036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.008 [2024-11-20 12:37:14.994201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.008 [2024-11-20 12:37:14.994211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.008 [2024-11-20 12:37:14.994218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.008 [2024-11-20 12:37:14.994225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.008 [2024-11-20 12:37:15.006309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.008 [2024-11-20 12:37:15.006701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.008 [2024-11-20 12:37:15.006719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.008 [2024-11-20 12:37:15.006727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.008 [2024-11-20 12:37:15.006891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.008 [2024-11-20 12:37:15.007061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.008 [2024-11-20 12:37:15.007071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.008 [2024-11-20 12:37:15.007078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.008 [2024-11-20 12:37:15.007084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.008 [2024-11-20 12:37:15.019147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.008 [2024-11-20 12:37:15.019490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.008 [2024-11-20 12:37:15.019511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.008 [2024-11-20 12:37:15.019519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.008 [2024-11-20 12:37:15.019682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.008 [2024-11-20 12:37:15.019845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.008 [2024-11-20 12:37:15.019855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.008 [2024-11-20 12:37:15.019861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.008 [2024-11-20 12:37:15.019867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.008 [2024-11-20 12:37:15.031933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.008 [2024-11-20 12:37:15.032368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.008 [2024-11-20 12:37:15.032414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.008 [2024-11-20 12:37:15.032439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.008 [2024-11-20 12:37:15.033033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.008 [2024-11-20 12:37:15.033620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.008 [2024-11-20 12:37:15.033629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.008 [2024-11-20 12:37:15.033636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.008 [2024-11-20 12:37:15.033643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.008 [2024-11-20 12:37:15.044731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.008 [2024-11-20 12:37:15.045152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.008 [2024-11-20 12:37:15.045169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.008 [2024-11-20 12:37:15.045176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.009 [2024-11-20 12:37:15.045339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.009 [2024-11-20 12:37:15.045503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.009 [2024-11-20 12:37:15.045512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.009 [2024-11-20 12:37:15.045519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.009 [2024-11-20 12:37:15.045526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.009 [2024-11-20 12:37:15.057586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.009 [2024-11-20 12:37:15.057981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.009 [2024-11-20 12:37:15.057999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.009 [2024-11-20 12:37:15.058007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.009 [2024-11-20 12:37:15.058173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.009 [2024-11-20 12:37:15.058338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.009 [2024-11-20 12:37:15.058347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.009 [2024-11-20 12:37:15.058354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.009 [2024-11-20 12:37:15.058360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.009 [2024-11-20 12:37:15.070516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.009 [2024-11-20 12:37:15.070933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.009 [2024-11-20 12:37:15.070984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.009 [2024-11-20 12:37:15.071012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.009 [2024-11-20 12:37:15.071555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.009 [2024-11-20 12:37:15.071720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.009 [2024-11-20 12:37:15.071729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.009 [2024-11-20 12:37:15.071735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.009 [2024-11-20 12:37:15.071741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.009 [2024-11-20 12:37:15.083448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.009 [2024-11-20 12:37:15.083876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.009 [2024-11-20 12:37:15.083921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.009 [2024-11-20 12:37:15.083946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.009 [2024-11-20 12:37:15.084446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.009 [2024-11-20 12:37:15.084614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.009 [2024-11-20 12:37:15.084625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.009 [2024-11-20 12:37:15.084631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.009 [2024-11-20 12:37:15.084638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.009 [2024-11-20 12:37:15.096245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.009 [2024-11-20 12:37:15.096660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.009 [2024-11-20 12:37:15.096677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.009 [2024-11-20 12:37:15.096685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.009 [2024-11-20 12:37:15.096848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.009 [2024-11-20 12:37:15.097018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.009 [2024-11-20 12:37:15.097033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.009 [2024-11-20 12:37:15.097040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.009 [2024-11-20 12:37:15.097046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.009 5657.60 IOPS, 22.10 MiB/s [2024-11-20T11:37:15.125Z] [2024-11-20 12:37:15.109077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.009 [2024-11-20 12:37:15.109416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.009 [2024-11-20 12:37:15.109433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.009 [2024-11-20 12:37:15.109441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.009 [2024-11-20 12:37:15.109604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.009 [2024-11-20 12:37:15.109768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.009 [2024-11-20 12:37:15.109778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.009 [2024-11-20 12:37:15.109784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.009 [2024-11-20 12:37:15.109791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.269 [2024-11-20 12:37:15.122224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.269 [2024-11-20 12:37:15.122698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.269 [2024-11-20 12:37:15.122744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.269 [2024-11-20 12:37:15.122770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.269 [2024-11-20 12:37:15.123368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.269 [2024-11-20 12:37:15.123934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.269 [2024-11-20 12:37:15.123951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.269 [2024-11-20 12:37:15.123960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.269 [2024-11-20 12:37:15.123968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.269 [2024-11-20 12:37:15.135296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.269 [2024-11-20 12:37:15.135720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.269 [2024-11-20 12:37:15.135738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.269 [2024-11-20 12:37:15.135747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.269 [2024-11-20 12:37:15.135920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.269 [2024-11-20 12:37:15.136101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.269 [2024-11-20 12:37:15.136111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.269 [2024-11-20 12:37:15.136119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.269 [2024-11-20 12:37:15.136125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.269 [2024-11-20 12:37:15.148379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.269 [2024-11-20 12:37:15.148737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.269 [2024-11-20 12:37:15.148756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.269 [2024-11-20 12:37:15.148764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.269 [2024-11-20 12:37:15.148942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.269 [2024-11-20 12:37:15.149128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.269 [2024-11-20 12:37:15.149139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.269 [2024-11-20 12:37:15.149145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.269 [2024-11-20 12:37:15.149152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.269 [2024-11-20 12:37:15.161483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.269 [2024-11-20 12:37:15.161914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.269 [2024-11-20 12:37:15.161932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.269 [2024-11-20 12:37:15.161940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.269 [2024-11-20 12:37:15.162124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.269 [2024-11-20 12:37:15.162303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.269 [2024-11-20 12:37:15.162314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.269 [2024-11-20 12:37:15.162321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.269 [2024-11-20 12:37:15.162328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.269 [2024-11-20 12:37:15.174649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.269 [2024-11-20 12:37:15.175083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.269 [2024-11-20 12:37:15.175103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.269 [2024-11-20 12:37:15.175112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.269 [2024-11-20 12:37:15.175291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.269 [2024-11-20 12:37:15.175470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.269 [2024-11-20 12:37:15.175480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.269 [2024-11-20 12:37:15.175487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.269 [2024-11-20 12:37:15.175493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.269 [2024-11-20 12:37:15.187703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.269 [2024-11-20 12:37:15.188116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.269 [2024-11-20 12:37:15.188138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.269 [2024-11-20 12:37:15.188147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.269 [2024-11-20 12:37:15.188326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.269 [2024-11-20 12:37:15.188504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.269 [2024-11-20 12:37:15.188515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.269 [2024-11-20 12:37:15.188522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.269 [2024-11-20 12:37:15.188528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.269 [2024-11-20 12:37:15.201046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.269 [2024-11-20 12:37:15.201419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.269 [2024-11-20 12:37:15.201437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.269 [2024-11-20 12:37:15.201447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.269 [2024-11-20 12:37:15.201639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.269 [2024-11-20 12:37:15.201819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.269 [2024-11-20 12:37:15.201829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.269 [2024-11-20 12:37:15.201837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.269 [2024-11-20 12:37:15.201844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.269 [2024-11-20 12:37:15.214292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.269 [2024-11-20 12:37:15.214656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.269 [2024-11-20 12:37:15.214674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.270 [2024-11-20 12:37:15.214683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.270 [2024-11-20 12:37:15.214867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.270 [2024-11-20 12:37:15.215059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.270 [2024-11-20 12:37:15.215071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.270 [2024-11-20 12:37:15.215078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.270 [2024-11-20 12:37:15.215087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.270 [2024-11-20 12:37:15.227355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.270 [2024-11-20 12:37:15.227724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.270 [2024-11-20 12:37:15.227743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.270 [2024-11-20 12:37:15.227752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.270 [2024-11-20 12:37:15.227939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.270 [2024-11-20 12:37:15.228132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.270 [2024-11-20 12:37:15.228144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.270 [2024-11-20 12:37:15.228151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.270 [2024-11-20 12:37:15.228160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.270 [2024-11-20 12:37:15.240446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.270 [2024-11-20 12:37:15.240801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.270 [2024-11-20 12:37:15.240820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.270 [2024-11-20 12:37:15.240828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.270 [2024-11-20 12:37:15.241012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.270 [2024-11-20 12:37:15.241190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.270 [2024-11-20 12:37:15.241201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.270 [2024-11-20 12:37:15.241208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.270 [2024-11-20 12:37:15.241215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.270 [2024-11-20 12:37:15.253504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.270 [2024-11-20 12:37:15.253891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.270 [2024-11-20 12:37:15.253909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.270 [2024-11-20 12:37:15.253917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.270 [2024-11-20 12:37:15.254102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.270 [2024-11-20 12:37:15.254282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.270 [2024-11-20 12:37:15.254292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.270 [2024-11-20 12:37:15.254299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.270 [2024-11-20 12:37:15.254306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.270 [2024-11-20 12:37:15.266567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.270 [2024-11-20 12:37:15.266998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.270 [2024-11-20 12:37:15.267017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.270 [2024-11-20 12:37:15.267026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.270 [2024-11-20 12:37:15.267200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.270 [2024-11-20 12:37:15.267372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.270 [2024-11-20 12:37:15.267382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.270 [2024-11-20 12:37:15.267393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.270 [2024-11-20 12:37:15.267400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.270 [2024-11-20 12:37:15.279548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.270 [2024-11-20 12:37:15.279970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.270 [2024-11-20 12:37:15.279988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.270 [2024-11-20 12:37:15.279997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.270 [2024-11-20 12:37:15.280170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.270 [2024-11-20 12:37:15.280343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.270 [2024-11-20 12:37:15.280353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.270 [2024-11-20 12:37:15.280360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.270 [2024-11-20 12:37:15.280367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.270 [2024-11-20 12:37:15.292532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.270 [2024-11-20 12:37:15.292920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.270 [2024-11-20 12:37:15.292937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.270 [2024-11-20 12:37:15.292945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.270 [2024-11-20 12:37:15.293115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.270 [2024-11-20 12:37:15.293280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.270 [2024-11-20 12:37:15.293290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.270 [2024-11-20 12:37:15.293296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.270 [2024-11-20 12:37:15.293303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.270 [2024-11-20 12:37:15.305442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.270 [2024-11-20 12:37:15.305893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.270 [2024-11-20 12:37:15.305936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.270 [2024-11-20 12:37:15.305971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.270 [2024-11-20 12:37:15.306408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.270 [2024-11-20 12:37:15.306583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.270 [2024-11-20 12:37:15.306594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.270 [2024-11-20 12:37:15.306601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.270 [2024-11-20 12:37:15.306608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.270 [2024-11-20 12:37:15.318435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.270 [2024-11-20 12:37:15.318860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.270 [2024-11-20 12:37:15.318905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.270 [2024-11-20 12:37:15.318930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.270 [2024-11-20 12:37:15.319346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.270 [2024-11-20 12:37:15.319511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.270 [2024-11-20 12:37:15.319520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.270 [2024-11-20 12:37:15.319527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.270 [2024-11-20 12:37:15.319534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.270 [2024-11-20 12:37:15.331486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.270 [2024-11-20 12:37:15.331887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.270 [2024-11-20 12:37:15.331904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.270 [2024-11-20 12:37:15.331912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.270 [2024-11-20 12:37:15.332098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.270 [2024-11-20 12:37:15.332271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.270 [2024-11-20 12:37:15.332281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.270 [2024-11-20 12:37:15.332288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.270 [2024-11-20 12:37:15.332294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.270 [2024-11-20 12:37:15.344426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.270 [2024-11-20 12:37:15.344850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.270 [2024-11-20 12:37:15.344896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.270 [2024-11-20 12:37:15.344921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.270 [2024-11-20 12:37:15.345427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.270 [2024-11-20 12:37:15.345592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.270 [2024-11-20 12:37:15.345602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.270 [2024-11-20 12:37:15.345609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.270 [2024-11-20 12:37:15.345616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.270 [2024-11-20 12:37:15.357328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.270 [2024-11-20 12:37:15.357766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.270 [2024-11-20 12:37:15.357788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.270 [2024-11-20 12:37:15.357796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.270 [2024-11-20 12:37:15.357964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.270 [2024-11-20 12:37:15.358128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.270 [2024-11-20 12:37:15.358137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.270 [2024-11-20 12:37:15.358144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.270 [2024-11-20 12:37:15.358150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.270 [2024-11-20 12:37:15.370289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.270 [2024-11-20 12:37:15.370662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.270 [2024-11-20 12:37:15.370679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.270 [2024-11-20 12:37:15.370687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.270 [2024-11-20 12:37:15.370850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.270 [2024-11-20 12:37:15.371020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.270 [2024-11-20 12:37:15.371030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.270 [2024-11-20 12:37:15.371037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.270 [2024-11-20 12:37:15.371043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.270 [2024-11-20 12:37:15.383454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.270 [2024-11-20 12:37:15.383912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.270 [2024-11-20 12:37:15.383931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.270 [2024-11-20 12:37:15.383940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.532 [2024-11-20 12:37:15.384129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.532 [2024-11-20 12:37:15.384318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.532 [2024-11-20 12:37:15.384334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.532 [2024-11-20 12:37:15.384346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.532 [2024-11-20 12:37:15.384358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.532 [2024-11-20 12:37:15.396328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.532 [2024-11-20 12:37:15.396613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.532 [2024-11-20 12:37:15.396630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.532 [2024-11-20 12:37:15.396638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.532 [2024-11-20 12:37:15.396811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.532 [2024-11-20 12:37:15.396994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.532 [2024-11-20 12:37:15.397004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.532 [2024-11-20 12:37:15.397012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.532 [2024-11-20 12:37:15.397020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.532 [2024-11-20 12:37:15.409281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.532 [2024-11-20 12:37:15.409631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.532 [2024-11-20 12:37:15.409648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.532 [2024-11-20 12:37:15.409656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.532 [2024-11-20 12:37:15.409819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.532 [2024-11-20 12:37:15.410004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.532 [2024-11-20 12:37:15.410015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.532 [2024-11-20 12:37:15.410022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.532 [2024-11-20 12:37:15.410029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.532 [2024-11-20 12:37:15.422095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.532 [2024-11-20 12:37:15.422452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.532 [2024-11-20 12:37:15.422469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.532 [2024-11-20 12:37:15.422478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.532 [2024-11-20 12:37:15.422649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.532 [2024-11-20 12:37:15.422823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.532 [2024-11-20 12:37:15.422833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.532 [2024-11-20 12:37:15.422839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.532 [2024-11-20 12:37:15.422847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.532 [2024-11-20 12:37:15.435085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.532 [2024-11-20 12:37:15.435428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.532 [2024-11-20 12:37:15.435445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.532 [2024-11-20 12:37:15.435453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.532 [2024-11-20 12:37:15.435616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.532 [2024-11-20 12:37:15.435780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.532 [2024-11-20 12:37:15.435789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.532 [2024-11-20 12:37:15.435799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.532 [2024-11-20 12:37:15.435806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.532 [2024-11-20 12:37:15.448075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.532 [2024-11-20 12:37:15.448479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.532 [2024-11-20 12:37:15.448524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.532 [2024-11-20 12:37:15.448550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.532 [2024-11-20 12:37:15.449142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.532 [2024-11-20 12:37:15.449636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.532 [2024-11-20 12:37:15.449645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.532 [2024-11-20 12:37:15.449652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.532 [2024-11-20 12:37:15.449658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.532 [2024-11-20 12:37:15.460958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.532 [2024-11-20 12:37:15.461279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.532 [2024-11-20 12:37:15.461296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.532 [2024-11-20 12:37:15.461304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.532 [2024-11-20 12:37:15.461467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.532 [2024-11-20 12:37:15.461630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.532 [2024-11-20 12:37:15.461639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.532 [2024-11-20 12:37:15.461646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.532 [2024-11-20 12:37:15.461652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.532 [2024-11-20 12:37:15.473983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.532 [2024-11-20 12:37:15.474305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.532 [2024-11-20 12:37:15.474322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.532 [2024-11-20 12:37:15.474331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.532 [2024-11-20 12:37:15.474493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.532 [2024-11-20 12:37:15.474655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.532 [2024-11-20 12:37:15.474665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.532 [2024-11-20 12:37:15.474671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.532 [2024-11-20 12:37:15.474678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.532 [2024-11-20 12:37:15.486946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.532 [2024-11-20 12:37:15.487853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.532 [2024-11-20 12:37:15.487876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.532 [2024-11-20 12:37:15.487885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.532 [2024-11-20 12:37:15.488084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.532 [2024-11-20 12:37:15.488257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.532 [2024-11-20 12:37:15.488268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.532 [2024-11-20 12:37:15.488275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.532 [2024-11-20 12:37:15.488283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.532 [2024-11-20 12:37:15.500066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.533 [2024-11-20 12:37:15.500359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.533 [2024-11-20 12:37:15.500377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.533 [2024-11-20 12:37:15.500386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.533 [2024-11-20 12:37:15.500565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.533 [2024-11-20 12:37:15.500744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.533 [2024-11-20 12:37:15.500765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.533 [2024-11-20 12:37:15.500773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.533 [2024-11-20 12:37:15.500796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.533 [2024-11-20 12:37:15.512912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.533 [2024-11-20 12:37:15.513339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.533 [2024-11-20 12:37:15.513359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.533 [2024-11-20 12:37:15.513368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.533 [2024-11-20 12:37:15.513540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.533 [2024-11-20 12:37:15.513714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.533 [2024-11-20 12:37:15.513725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.533 [2024-11-20 12:37:15.513731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.533 [2024-11-20 12:37:15.513738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.533 [2024-11-20 12:37:15.526035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.533 [2024-11-20 12:37:15.526322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.533 [2024-11-20 12:37:15.526341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.533 [2024-11-20 12:37:15.526353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.533 [2024-11-20 12:37:15.526533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.533 [2024-11-20 12:37:15.526713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.533 [2024-11-20 12:37:15.526724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.533 [2024-11-20 12:37:15.526731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.533 [2024-11-20 12:37:15.526738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.533 [2024-11-20 12:37:15.539088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.533 [2024-11-20 12:37:15.539371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.533 [2024-11-20 12:37:15.539389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.533 [2024-11-20 12:37:15.539398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.533 [2024-11-20 12:37:15.539576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.533 [2024-11-20 12:37:15.539754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.533 [2024-11-20 12:37:15.539765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.533 [2024-11-20 12:37:15.539772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.533 [2024-11-20 12:37:15.539780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.533 [2024-11-20 12:37:15.552127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.533 [2024-11-20 12:37:15.552538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.533 [2024-11-20 12:37:15.552556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.533 [2024-11-20 12:37:15.552565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.533 [2024-11-20 12:37:15.552743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.533 [2024-11-20 12:37:15.552922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.533 [2024-11-20 12:37:15.552933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.533 [2024-11-20 12:37:15.552940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.533 [2024-11-20 12:37:15.552953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.533 [2024-11-20 12:37:15.565176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.533 [2024-11-20 12:37:15.565613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.533 [2024-11-20 12:37:15.565631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.533 [2024-11-20 12:37:15.565639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.533 [2024-11-20 12:37:15.565812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.533 [2024-11-20 12:37:15.566049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.533 [2024-11-20 12:37:15.566061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.533 [2024-11-20 12:37:15.566068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.533 [2024-11-20 12:37:15.566076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.533 [2024-11-20 12:37:15.578112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.533 [2024-11-20 12:37:15.578529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.533 [2024-11-20 12:37:15.578585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.533 [2024-11-20 12:37:15.578610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.533 [2024-11-20 12:37:15.579135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.533 [2024-11-20 12:37:15.579322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.533 [2024-11-20 12:37:15.579333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.533 [2024-11-20 12:37:15.579340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.533 [2024-11-20 12:37:15.579347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.533 [2024-11-20 12:37:15.591026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.533 [2024-11-20 12:37:15.591386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.533 [2024-11-20 12:37:15.591405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.533 [2024-11-20 12:37:15.591413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.533 [2024-11-20 12:37:15.591585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.533 [2024-11-20 12:37:15.591760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.533 [2024-11-20 12:37:15.591770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.533 [2024-11-20 12:37:15.591777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.533 [2024-11-20 12:37:15.591783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.533 [2024-11-20 12:37:15.604032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.533 [2024-11-20 12:37:15.604476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.533 [2024-11-20 12:37:15.604520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.533 [2024-11-20 12:37:15.604545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.533 [2024-11-20 12:37:15.605061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.533 [2024-11-20 12:37:15.605236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.533 [2024-11-20 12:37:15.605246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.533 [2024-11-20 12:37:15.605256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.533 [2024-11-20 12:37:15.605264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.533 [2024-11-20 12:37:15.616971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.533 [2024-11-20 12:37:15.617388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.533 [2024-11-20 12:37:15.617433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.533 [2024-11-20 12:37:15.617459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.533 [2024-11-20 12:37:15.618045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.533 [2024-11-20 12:37:15.618219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.533 [2024-11-20 12:37:15.618230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.533 [2024-11-20 12:37:15.618237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.534 [2024-11-20 12:37:15.618244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.534 [2024-11-20 12:37:15.629846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.534 [2024-11-20 12:37:15.630270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.534 [2024-11-20 12:37:15.630288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.534 [2024-11-20 12:37:15.630296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.534 [2024-11-20 12:37:15.630459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.534 [2024-11-20 12:37:15.630623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.534 [2024-11-20 12:37:15.630633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.534 [2024-11-20 12:37:15.630639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.534 [2024-11-20 12:37:15.630646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.534 [2024-11-20 12:37:15.643067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.534 [2024-11-20 12:37:15.643412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.534 [2024-11-20 12:37:15.643429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.534 [2024-11-20 12:37:15.643439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.534 [2024-11-20 12:37:15.643617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.534 [2024-11-20 12:37:15.643796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.534 [2024-11-20 12:37:15.643806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.534 [2024-11-20 12:37:15.643813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.534 [2024-11-20 12:37:15.643820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.796 [2024-11-20 12:37:15.655987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.796 [2024-11-20 12:37:15.656408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.796 [2024-11-20 12:37:15.656426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.796 [2024-11-20 12:37:15.656434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.796 [2024-11-20 12:37:15.656598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.796 [2024-11-20 12:37:15.656762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.796 [2024-11-20 12:37:15.656772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.796 [2024-11-20 12:37:15.656778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.796 [2024-11-20 12:37:15.656784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 595671 Killed "${NVMF_APP[@]}" "$@" 00:27:32.796 12:37:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:32.796 12:37:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:32.796 12:37:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:32.796 12:37:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:32.796 12:37:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:32.796 12:37:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=597073 00:27:32.796 12:37:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 597073 00:27:32.796 12:37:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:32.796 12:37:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 597073 ']' 00:27:32.796 [2024-11-20 12:37:15.669161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.796 12:37:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.796 [2024-11-20 12:37:15.669570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.796 [2024-11-20 12:37:15.669589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.796 [2024-11-20 12:37:15.669597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.796 12:37:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 
00:27:32.796 [2024-11-20 12:37:15.669775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.796 [2024-11-20 12:37:15.669961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.796 [2024-11-20 12:37:15.669972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.796 [2024-11-20 12:37:15.669980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.796 [2024-11-20 12:37:15.669987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.796 12:37:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:32.796 12:37:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:32.796 12:37:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:32.796 [2024-11-20 12:37:15.682330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.796 [2024-11-20 12:37:15.682760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.796 [2024-11-20 12:37:15.682776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.796 [2024-11-20 12:37:15.682784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.796 [2024-11-20 12:37:15.682967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.796 [2024-11-20 12:37:15.683146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.796 [2024-11-20 12:37:15.683155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.796 [2024-11-20 12:37:15.683162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.796 [2024-11-20 12:37:15.683169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.796 [2024-11-20 12:37:15.695519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.796 [2024-11-20 12:37:15.695958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.796 [2024-11-20 12:37:15.695976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.796 [2024-11-20 12:37:15.695984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.796 [2024-11-20 12:37:15.696162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.796 [2024-11-20 12:37:15.696347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.796 [2024-11-20 12:37:15.696356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.796 [2024-11-20 12:37:15.696362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.796 [2024-11-20 12:37:15.696369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.796 [2024-11-20 12:37:15.708467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.796 [2024-11-20 12:37:15.708875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.796 [2024-11-20 12:37:15.708892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.796 [2024-11-20 12:37:15.708900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.796 [2024-11-20 12:37:15.709079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.796 [2024-11-20 12:37:15.709253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.796 [2024-11-20 12:37:15.709262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.796 [2024-11-20 12:37:15.709269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.796 [2024-11-20 12:37:15.709275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.796 [2024-11-20 12:37:15.716476] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:27:32.796 [2024-11-20 12:37:15.716514] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.796 [2024-11-20 12:37:15.721532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.797 [2024-11-20 12:37:15.721969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.797 [2024-11-20 12:37:15.721987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.797 [2024-11-20 12:37:15.721995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.797 [2024-11-20 12:37:15.722186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.797 [2024-11-20 12:37:15.722365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.797 [2024-11-20 12:37:15.722374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.797 [2024-11-20 12:37:15.722380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.797 [2024-11-20 12:37:15.722387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.797 [2024-11-20 12:37:15.734514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.797 [2024-11-20 12:37:15.734927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.797 [2024-11-20 12:37:15.734944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.797 [2024-11-20 12:37:15.734957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.797 [2024-11-20 12:37:15.735151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.797 [2024-11-20 12:37:15.735328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.797 [2024-11-20 12:37:15.735337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.797 [2024-11-20 12:37:15.735344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.797 [2024-11-20 12:37:15.735351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.797 [2024-11-20 12:37:15.747505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.797 [2024-11-20 12:37:15.747963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.797 [2024-11-20 12:37:15.747983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.797 [2024-11-20 12:37:15.747991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.797 [2024-11-20 12:37:15.748186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.797 [2024-11-20 12:37:15.748363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.797 [2024-11-20 12:37:15.748372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.797 [2024-11-20 12:37:15.748380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.797 [2024-11-20 12:37:15.748386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.797 [2024-11-20 12:37:15.760846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.797 [2024-11-20 12:37:15.761288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.797 [2024-11-20 12:37:15.761306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.797 [2024-11-20 12:37:15.761316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.797 [2024-11-20 12:37:15.761494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.797 [2024-11-20 12:37:15.761673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.797 [2024-11-20 12:37:15.761682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.797 [2024-11-20 12:37:15.761689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.797 [2024-11-20 12:37:15.761696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.797 [2024-11-20 12:37:15.773875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.797 [2024-11-20 12:37:15.774339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.797 [2024-11-20 12:37:15.774357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.797 [2024-11-20 12:37:15.774365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.797 [2024-11-20 12:37:15.774543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.797 [2024-11-20 12:37:15.774719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.797 [2024-11-20 12:37:15.774728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.797 [2024-11-20 12:37:15.774735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.797 [2024-11-20 12:37:15.774741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.797 [2024-11-20 12:37:15.786876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.797 [2024-11-20 12:37:15.787309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.797 [2024-11-20 12:37:15.787327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.797 [2024-11-20 12:37:15.787335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.797 [2024-11-20 12:37:15.787506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.797 [2024-11-20 12:37:15.787681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.797 [2024-11-20 12:37:15.787691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.797 [2024-11-20 12:37:15.787697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.797 [2024-11-20 12:37:15.787703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.797 [2024-11-20 12:37:15.799178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:32.797 [2024-11-20 12:37:15.800015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.797 [2024-11-20 12:37:15.800450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.797 [2024-11-20 12:37:15.800467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:32.797 [2024-11-20 12:37:15.800478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:32.797 [2024-11-20 12:37:15.800655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:32.797 [2024-11-20 12:37:15.800832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.797 [2024-11-20 12:37:15.800841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.797 [2024-11-20 12:37:15.800848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.797 [2024-11-20 12:37:15.800854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.797 [2024-11-20 12:37:15.812999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.797 [2024-11-20 12:37:15.813476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.797 [2024-11-20 12:37:15.813495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.797 [2024-11-20 12:37:15.813505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.797 [2024-11-20 12:37:15.813678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.797 [2024-11-20 12:37:15.813851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.797 [2024-11-20 12:37:15.813860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.797 [2024-11-20 12:37:15.813868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.797 [2024-11-20 12:37:15.813876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.797 [2024-11-20 12:37:15.825998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.797 [2024-11-20 12:37:15.826353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.797 [2024-11-20 12:37:15.826371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.797 [2024-11-20 12:37:15.826378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.797 [2024-11-20 12:37:15.826552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.797 [2024-11-20 12:37:15.826723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.797 [2024-11-20 12:37:15.826733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.797 [2024-11-20 12:37:15.826739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.797 [2024-11-20 12:37:15.826746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.797 [2024-11-20 12:37:15.839090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.797 [2024-11-20 12:37:15.839551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.797 [2024-11-20 12:37:15.839568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.797 [2024-11-20 12:37:15.839576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.797 [2024-11-20 12:37:15.839749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.797 [2024-11-20 12:37:15.839944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.797 [2024-11-20 12:37:15.839957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.797 [2024-11-20 12:37:15.839964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.798 [2024-11-20 12:37:15.839971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.798 [2024-11-20 12:37:15.840441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:32.798 [2024-11-20 12:37:15.840464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:32.798 [2024-11-20 12:37:15.840472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:32.798 [2024-11-20 12:37:15.840478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:32.798 [2024-11-20 12:37:15.840484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:32.798 [2024-11-20 12:37:15.841886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:32.798 [2024-11-20 12:37:15.842030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:32.798 [2024-11-20 12:37:15.842031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:27:32.798 [2024-11-20 12:37:15.852153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.798 [2024-11-20 12:37:15.852532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.798 [2024-11-20 12:37:15.852552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.798 [2024-11-20 12:37:15.852561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.798 [2024-11-20 12:37:15.852740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.798 [2024-11-20 12:37:15.852919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.798 [2024-11-20 12:37:15.852928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.798 [2024-11-20 12:37:15.852936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.798 [2024-11-20 12:37:15.852942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.798 [2024-11-20 12:37:15.865257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.798 [2024-11-20 12:37:15.865691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.798 [2024-11-20 12:37:15.865711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.798 [2024-11-20 12:37:15.865719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.798 [2024-11-20 12:37:15.865897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.798 [2024-11-20 12:37:15.866081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.798 [2024-11-20 12:37:15.866090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.798 [2024-11-20 12:37:15.866098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.798 [2024-11-20 12:37:15.866105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.798 [2024-11-20 12:37:15.878443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.798 [2024-11-20 12:37:15.878907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.798 [2024-11-20 12:37:15.878927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.798 [2024-11-20 12:37:15.878936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.798 [2024-11-20 12:37:15.879119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.798 [2024-11-20 12:37:15.879298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.798 [2024-11-20 12:37:15.879307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.798 [2024-11-20 12:37:15.879315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.798 [2024-11-20 12:37:15.879322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.798 [2024-11-20 12:37:15.891638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.798 [2024-11-20 12:37:15.891992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.798 [2024-11-20 12:37:15.892013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.798 [2024-11-20 12:37:15.892022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.798 [2024-11-20 12:37:15.892200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.798 [2024-11-20 12:37:15.892379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.798 [2024-11-20 12:37:15.892388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.798 [2024-11-20 12:37:15.892395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.798 [2024-11-20 12:37:15.892402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:32.798 [2024-11-20 12:37:15.904778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:32.798 [2024-11-20 12:37:15.905212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:32.798 [2024-11-20 12:37:15.905233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:32.798 [2024-11-20 12:37:15.905242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:32.798 [2024-11-20 12:37:15.905421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:32.798 [2024-11-20 12:37:15.905600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:32.798 [2024-11-20 12:37:15.905609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:32.798 [2024-11-20 12:37:15.905616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:32.798 [2024-11-20 12:37:15.905623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.060 [2024-11-20 12:37:15.917873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.060 [2024-11-20 12:37:15.918336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.060 [2024-11-20 12:37:15.918354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.060 [2024-11-20 12:37:15.918363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.060 [2024-11-20 12:37:15.918544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.060 [2024-11-20 12:37:15.918723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.060 [2024-11-20 12:37:15.918732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.060 [2024-11-20 12:37:15.918739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.060 [2024-11-20 12:37:15.918746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.060 [2024-11-20 12:37:15.931071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.060 [2024-11-20 12:37:15.931503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.060 [2024-11-20 12:37:15.931521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.060 [2024-11-20 12:37:15.931530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.060 [2024-11-20 12:37:15.931707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.060 [2024-11-20 12:37:15.931885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.060 [2024-11-20 12:37:15.931896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.060 [2024-11-20 12:37:15.931903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.060 [2024-11-20 12:37:15.931910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.060 [2024-11-20 12:37:15.944233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.060 [2024-11-20 12:37:15.944685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.060 [2024-11-20 12:37:15.944703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.060 [2024-11-20 12:37:15.944711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.060 [2024-11-20 12:37:15.944889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.060 [2024-11-20 12:37:15.945071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.060 [2024-11-20 12:37:15.945081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.060 [2024-11-20 12:37:15.945088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.060 [2024-11-20 12:37:15.945095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.060 [2024-11-20 12:37:15.957424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.060 [2024-11-20 12:37:15.957858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.060 [2024-11-20 12:37:15.957875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.060 [2024-11-20 12:37:15.957883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.060 [2024-11-20 12:37:15.958066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.060 [2024-11-20 12:37:15.958246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.060 [2024-11-20 12:37:15.958258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.060 [2024-11-20 12:37:15.958265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.060 [2024-11-20 12:37:15.958272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.060 [2024-11-20 12:37:15.970580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.060 [2024-11-20 12:37:15.970930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.060 [2024-11-20 12:37:15.970952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.060 [2024-11-20 12:37:15.970961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.060 [2024-11-20 12:37:15.971139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.060 [2024-11-20 12:37:15.971317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.060 [2024-11-20 12:37:15.971326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.060 [2024-11-20 12:37:15.971334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.060 [2024-11-20 12:37:15.971340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.060 [2024-11-20 12:37:15.983671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.060 [2024-11-20 12:37:15.984081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.060 [2024-11-20 12:37:15.984098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.060 [2024-11-20 12:37:15.984106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.060 [2024-11-20 12:37:15.984284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.060 [2024-11-20 12:37:15.984464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.060 [2024-11-20 12:37:15.984472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.060 [2024-11-20 12:37:15.984479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.060 [2024-11-20 12:37:15.984485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.060 [2024-11-20 12:37:15.996812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.060 [2024-11-20 12:37:15.997254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.060 [2024-11-20 12:37:15.997271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.060 [2024-11-20 12:37:15.997279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.060 [2024-11-20 12:37:15.997457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.060 [2024-11-20 12:37:15.997635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.060 [2024-11-20 12:37:15.997644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.060 [2024-11-20 12:37:15.997651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.060 [2024-11-20 12:37:15.997661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.060 [2024-11-20 12:37:16.009852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.060 [2024-11-20 12:37:16.010223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.060 [2024-11-20 12:37:16.010240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.060 [2024-11-20 12:37:16.010249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.060 [2024-11-20 12:37:16.010426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.060 [2024-11-20 12:37:16.010605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.060 [2024-11-20 12:37:16.010614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.060 [2024-11-20 12:37:16.010621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.060 [2024-11-20 12:37:16.010628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.060 [2024-11-20 12:37:16.022962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.060 [2024-11-20 12:37:16.023397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.060 [2024-11-20 12:37:16.023414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.060 [2024-11-20 12:37:16.023422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.060 [2024-11-20 12:37:16.023600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.060 [2024-11-20 12:37:16.023779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.060 [2024-11-20 12:37:16.023788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.060 [2024-11-20 12:37:16.023794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.060 [2024-11-20 12:37:16.023801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.060 [2024-11-20 12:37:16.036132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.060 [2024-11-20 12:37:16.036564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.061 [2024-11-20 12:37:16.036582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.061 [2024-11-20 12:37:16.036590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.061 [2024-11-20 12:37:16.036768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.061 [2024-11-20 12:37:16.036952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.061 [2024-11-20 12:37:16.036961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.061 [2024-11-20 12:37:16.036970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.061 [2024-11-20 12:37:16.036977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.061 [2024-11-20 12:37:16.049302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.061 [2024-11-20 12:37:16.049653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.061 [2024-11-20 12:37:16.049674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.061 [2024-11-20 12:37:16.049682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.061 [2024-11-20 12:37:16.049859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.061 [2024-11-20 12:37:16.050040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.061 [2024-11-20 12:37:16.050049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.061 [2024-11-20 12:37:16.050057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.061 [2024-11-20 12:37:16.050063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.061 [2024-11-20 12:37:16.062368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.061 [2024-11-20 12:37:16.062804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.061 [2024-11-20 12:37:16.062821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.061 [2024-11-20 12:37:16.062829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.061 [2024-11-20 12:37:16.063010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.061 [2024-11-20 12:37:16.063189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.061 [2024-11-20 12:37:16.063197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.061 [2024-11-20 12:37:16.063204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.061 [2024-11-20 12:37:16.063211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.061 [2024-11-20 12:37:16.075533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.061 [2024-11-20 12:37:16.075972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.061 [2024-11-20 12:37:16.075991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.061 [2024-11-20 12:37:16.075998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.061 [2024-11-20 12:37:16.076176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.061 [2024-11-20 12:37:16.076354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.061 [2024-11-20 12:37:16.076362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.061 [2024-11-20 12:37:16.076368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.061 [2024-11-20 12:37:16.076375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.061 [2024-11-20 12:37:16.088692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.061 [2024-11-20 12:37:16.089099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.061 [2024-11-20 12:37:16.089117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.061 [2024-11-20 12:37:16.089125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.061 [2024-11-20 12:37:16.089305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.061 [2024-11-20 12:37:16.089483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.061 [2024-11-20 12:37:16.089491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.061 [2024-11-20 12:37:16.089498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.061 [2024-11-20 12:37:16.089504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.061 4714.67 IOPS, 18.42 MiB/s [2024-11-20T11:37:16.177Z]
00:27:33.061 [2024-11-20 12:37:16.103143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.061 [2024-11-20 12:37:16.103587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.061 [2024-11-20 12:37:16.103603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.061 [2024-11-20 12:37:16.103611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.061 [2024-11-20 12:37:16.103788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.061 [2024-11-20 12:37:16.103969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.061 [2024-11-20 12:37:16.103979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.061 [2024-11-20 12:37:16.103986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.061 [2024-11-20 12:37:16.103992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.061 [2024-11-20 12:37:16.116305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.061 [2024-11-20 12:37:16.116661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.061 [2024-11-20 12:37:16.116678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.061 [2024-11-20 12:37:16.116686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.061 [2024-11-20 12:37:16.116862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.061 [2024-11-20 12:37:16.117045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.061 [2024-11-20 12:37:16.117055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.061 [2024-11-20 12:37:16.117061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.061 [2024-11-20 12:37:16.117068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.061 [2024-11-20 12:37:16.129386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.061 [2024-11-20 12:37:16.129819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.061 [2024-11-20 12:37:16.129836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.061 [2024-11-20 12:37:16.129844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.061 [2024-11-20 12:37:16.130025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.061 [2024-11-20 12:37:16.130204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.061 [2024-11-20 12:37:16.130216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.061 [2024-11-20 12:37:16.130222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.061 [2024-11-20 12:37:16.130229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.061 [2024-11-20 12:37:16.142536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.061 [2024-11-20 12:37:16.142967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.061 [2024-11-20 12:37:16.142984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.061 [2024-11-20 12:37:16.142992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.061 [2024-11-20 12:37:16.143168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.061 [2024-11-20 12:37:16.143346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.061 [2024-11-20 12:37:16.143355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.061 [2024-11-20 12:37:16.143361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.061 [2024-11-20 12:37:16.143368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.061 [2024-11-20 12:37:16.155682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.061 [2024-11-20 12:37:16.156104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.061 [2024-11-20 12:37:16.156121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.061 [2024-11-20 12:37:16.156129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.061 [2024-11-20 12:37:16.156306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.061 [2024-11-20 12:37:16.156484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.061 [2024-11-20 12:37:16.156492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.061 [2024-11-20 12:37:16.156498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.061 [2024-11-20 12:37:16.156505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.061 [2024-11-20 12:37:16.168846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.061 [2024-11-20 12:37:16.169108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.062 [2024-11-20 12:37:16.169132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.062 [2024-11-20 12:37:16.169140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.062 [2024-11-20 12:37:16.169321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.062 [2024-11-20 12:37:16.169501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.062 [2024-11-20 12:37:16.169510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.062 [2024-11-20 12:37:16.169517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.062 [2024-11-20 12:37:16.169530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.322 [2024-11-20 12:37:16.181940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.322 [2024-11-20 12:37:16.182383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.322 [2024-11-20 12:37:16.182400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.322 [2024-11-20 12:37:16.182408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.322 [2024-11-20 12:37:16.182585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.322 [2024-11-20 12:37:16.182762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.322 [2024-11-20 12:37:16.182770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.322 [2024-11-20 12:37:16.182777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.322 [2024-11-20 12:37:16.182784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.322 [2024-11-20 12:37:16.195113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.322 [2024-11-20 12:37:16.195551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.322 [2024-11-20 12:37:16.195568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.322 [2024-11-20 12:37:16.195575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.322 [2024-11-20 12:37:16.195752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.322 [2024-11-20 12:37:16.195928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.322 [2024-11-20 12:37:16.195936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.322 [2024-11-20 12:37:16.195943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.322 [2024-11-20 12:37:16.195955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.322 [2024-11-20 12:37:16.208333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.322 [2024-11-20 12:37:16.208768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.322 [2024-11-20 12:37:16.208786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.322 [2024-11-20 12:37:16.208793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.322 [2024-11-20 12:37:16.208975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.322 [2024-11-20 12:37:16.209153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.322 [2024-11-20 12:37:16.209162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.322 [2024-11-20 12:37:16.209169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.322 [2024-11-20 12:37:16.209175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.322 [2024-11-20 12:37:16.221499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.322 [2024-11-20 12:37:16.221909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.322 [2024-11-20 12:37:16.221930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.322 [2024-11-20 12:37:16.221938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.322 [2024-11-20 12:37:16.222118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.322 [2024-11-20 12:37:16.222300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.322 [2024-11-20 12:37:16.222308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.322 [2024-11-20 12:37:16.222315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.322 [2024-11-20 12:37:16.222321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.322 [2024-11-20 12:37:16.234637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.322 [2024-11-20 12:37:16.234888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.322 [2024-11-20 12:37:16.234905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.322 [2024-11-20 12:37:16.234912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.322 [2024-11-20 12:37:16.235093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.322 [2024-11-20 12:37:16.235271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.322 [2024-11-20 12:37:16.235279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.322 [2024-11-20 12:37:16.235286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.322 [2024-11-20 12:37:16.235292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.322 [2024-11-20 12:37:16.247755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.322 [2024-11-20 12:37:16.248194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.322 [2024-11-20 12:37:16.248211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.322 [2024-11-20 12:37:16.248219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.322 [2024-11-20 12:37:16.248395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.322 [2024-11-20 12:37:16.248574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.323 [2024-11-20 12:37:16.248582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.323 [2024-11-20 12:37:16.248590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.323 [2024-11-20 12:37:16.248598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.323 [2024-11-20 12:37:16.260918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.323 [2024-11-20 12:37:16.261370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.323 [2024-11-20 12:37:16.261387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.323 [2024-11-20 12:37:16.261394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.323 [2024-11-20 12:37:16.261574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.323 [2024-11-20 12:37:16.261752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.323 [2024-11-20 12:37:16.261760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.323 [2024-11-20 12:37:16.261767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.323 [2024-11-20 12:37:16.261773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.323 [2024-11-20 12:37:16.274100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.323 [2024-11-20 12:37:16.274508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.323 [2024-11-20 12:37:16.274525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.323 [2024-11-20 12:37:16.274533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.323 [2024-11-20 12:37:16.274710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.323 [2024-11-20 12:37:16.274887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.323 [2024-11-20 12:37:16.274895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.323 [2024-11-20 12:37:16.274902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.323 [2024-11-20 12:37:16.274908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.323 [2024-11-20 12:37:16.287216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.323 [2024-11-20 12:37:16.287571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.323 [2024-11-20 12:37:16.287588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.323 [2024-11-20 12:37:16.287595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.323 [2024-11-20 12:37:16.287772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.323 [2024-11-20 12:37:16.287955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.323 [2024-11-20 12:37:16.287964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.323 [2024-11-20 12:37:16.287971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.323 [2024-11-20 12:37:16.287977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.323 [2024-11-20 12:37:16.300302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.323 [2024-11-20 12:37:16.300737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.323 [2024-11-20 12:37:16.300755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.323 [2024-11-20 12:37:16.300762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.323 [2024-11-20 12:37:16.300940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.323 [2024-11-20 12:37:16.301123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.323 [2024-11-20 12:37:16.301135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.323 [2024-11-20 12:37:16.301142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.323 [2024-11-20 12:37:16.301148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.323 [2024-11-20 12:37:16.313456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.323 [2024-11-20 12:37:16.313889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.323 [2024-11-20 12:37:16.313906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.323 [2024-11-20 12:37:16.313914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.323 [2024-11-20 12:37:16.314095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.323 [2024-11-20 12:37:16.314273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.323 [2024-11-20 12:37:16.314282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.323 [2024-11-20 12:37:16.314289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.323 [2024-11-20 12:37:16.314296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.323 [2024-11-20 12:37:16.326621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.323 [2024-11-20 12:37:16.327061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.323 [2024-11-20 12:37:16.327079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.323 [2024-11-20 12:37:16.327087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.323 [2024-11-20 12:37:16.327264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.323 [2024-11-20 12:37:16.327443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.323 [2024-11-20 12:37:16.327451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.323 [2024-11-20 12:37:16.327458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.323 [2024-11-20 12:37:16.327465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.323 [2024-11-20 12:37:16.339804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.323 [2024-11-20 12:37:16.340152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.323 [2024-11-20 12:37:16.340169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.323 [2024-11-20 12:37:16.340176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.323 [2024-11-20 12:37:16.340354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.323 [2024-11-20 12:37:16.340532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.323 [2024-11-20 12:37:16.340541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.323 [2024-11-20 12:37:16.340547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.323 [2024-11-20 12:37:16.340554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.323 [2024-11-20 12:37:16.352871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.323 [2024-11-20 12:37:16.353243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.323 [2024-11-20 12:37:16.353261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.323 [2024-11-20 12:37:16.353269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.323 [2024-11-20 12:37:16.353447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.323 [2024-11-20 12:37:16.353625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.323 [2024-11-20 12:37:16.353636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.323 [2024-11-20 12:37:16.353643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.323 [2024-11-20 12:37:16.353650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.323 [2024-11-20 12:37:16.365982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.323 [2024-11-20 12:37:16.366390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.323 [2024-11-20 12:37:16.366407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.323 [2024-11-20 12:37:16.366415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.323 [2024-11-20 12:37:16.366592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.323 [2024-11-20 12:37:16.366771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.323 [2024-11-20 12:37:16.366779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.323 [2024-11-20 12:37:16.366786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.323 [2024-11-20 12:37:16.366792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.323 [2024-11-20 12:37:16.379125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.323 [2024-11-20 12:37:16.379536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.323 [2024-11-20 12:37:16.379553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.323 [2024-11-20 12:37:16.379561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.323 [2024-11-20 12:37:16.379738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.324 [2024-11-20 12:37:16.379916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.324 [2024-11-20 12:37:16.379925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.324 [2024-11-20 12:37:16.379932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.324 [2024-11-20 12:37:16.379938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.324 [2024-11-20 12:37:16.392274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.324 [2024-11-20 12:37:16.392703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.324 [2024-11-20 12:37:16.392723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.324 [2024-11-20 12:37:16.392731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.324 [2024-11-20 12:37:16.392907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.324 [2024-11-20 12:37:16.393088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.324 [2024-11-20 12:37:16.393098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.324 [2024-11-20 12:37:16.393104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.324 [2024-11-20 12:37:16.393111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.324 [2024-11-20 12:37:16.405455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.324 [2024-11-20 12:37:16.405885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.324 [2024-11-20 12:37:16.405901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.324 [2024-11-20 12:37:16.405909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.324 [2024-11-20 12:37:16.406092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.324 [2024-11-20 12:37:16.406272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.324 [2024-11-20 12:37:16.406281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.324 [2024-11-20 12:37:16.406287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.324 [2024-11-20 12:37:16.406294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.324 [2024-11-20 12:37:16.418617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:33.324 [2024-11-20 12:37:16.419049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.324 [2024-11-20 12:37:16.419067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420 00:27:33.324 [2024-11-20 12:37:16.419075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set 00:27:33.324 [2024-11-20 12:37:16.419252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor 00:27:33.324 [2024-11-20 12:37:16.419430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:33.324 [2024-11-20 12:37:16.419439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:33.324 [2024-11-20 12:37:16.419445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:33.324 [2024-11-20 12:37:16.419451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:33.324 [2024-11-20 12:37:16.431778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.324 [2024-11-20 12:37:16.432224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.324 [2024-11-20 12:37:16.432244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.324 [2024-11-20 12:37:16.432252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.324 [2024-11-20 12:37:16.432435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.324 [2024-11-20 12:37:16.432614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.324 [2024-11-20 12:37:16.432623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.324 [2024-11-20 12:37:16.432629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.324 [2024-11-20 12:37:16.432636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.584 [2024-11-20 12:37:16.444878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.584 [2024-11-20 12:37:16.445326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.584 [2024-11-20 12:37:16.445343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.584 [2024-11-20 12:37:16.445352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.584 [2024-11-20 12:37:16.445529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.584 [2024-11-20 12:37:16.445707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.584 [2024-11-20 12:37:16.445715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.584 [2024-11-20 12:37:16.445722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.584 [2024-11-20 12:37:16.445729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.584 [2024-11-20 12:37:16.458057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.584 [2024-11-20 12:37:16.458491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.584 [2024-11-20 12:37:16.458509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.584 [2024-11-20 12:37:16.458516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.584 [2024-11-20 12:37:16.458694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.584 [2024-11-20 12:37:16.458871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.584 [2024-11-20 12:37:16.458880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.584 [2024-11-20 12:37:16.458887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.584 [2024-11-20 12:37:16.458894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.584 [2024-11-20 12:37:16.471219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.584 [2024-11-20 12:37:16.471632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.584 [2024-11-20 12:37:16.471650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.584 [2024-11-20 12:37:16.471658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.584 [2024-11-20 12:37:16.471835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.584 [2024-11-20 12:37:16.472017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.584 [2024-11-20 12:37:16.472027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.584 [2024-11-20 12:37:16.472037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.584 [2024-11-20 12:37:16.472045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.584 [2024-11-20 12:37:16.484356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.584 [2024-11-20 12:37:16.484768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.584 [2024-11-20 12:37:16.484786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.584 [2024-11-20 12:37:16.484793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.584 [2024-11-20 12:37:16.484975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.584 [2024-11-20 12:37:16.485153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.584 [2024-11-20 12:37:16.485163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.584 [2024-11-20 12:37:16.485171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.584 [2024-11-20 12:37:16.485178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.584 [2024-11-20 12:37:16.497504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.584 [2024-11-20 12:37:16.497915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.584 [2024-11-20 12:37:16.497932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.584 [2024-11-20 12:37:16.497940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.584 [2024-11-20 12:37:16.498123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.584 [2024-11-20 12:37:16.498301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.584 [2024-11-20 12:37:16.498311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.584 [2024-11-20 12:37:16.498317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.584 [2024-11-20 12:37:16.498324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.584 [2024-11-20 12:37:16.510672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.584 [2024-11-20 12:37:16.511105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.585 [2024-11-20 12:37:16.511124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.585 [2024-11-20 12:37:16.511132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.585 [2024-11-20 12:37:16.511309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.585 [2024-11-20 12:37:16.511488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.585 [2024-11-20 12:37:16.511498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.585 [2024-11-20 12:37:16.511505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.585 [2024-11-20 12:37:16.511511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.585 [2024-11-20 12:37:16.523845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.585 [2024-11-20 12:37:16.524197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.585 [2024-11-20 12:37:16.524214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.585 [2024-11-20 12:37:16.524222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.585 [2024-11-20 12:37:16.524398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.585 [2024-11-20 12:37:16.524575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.585 [2024-11-20 12:37:16.524583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.585 [2024-11-20 12:37:16.524590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.585 [2024-11-20 12:37:16.524597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.585 [2024-11-20 12:37:16.536919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.585 [2024-11-20 12:37:16.537329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.585 [2024-11-20 12:37:16.537346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.585 [2024-11-20 12:37:16.537354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.585 [2024-11-20 12:37:16.537530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.585 [2024-11-20 12:37:16.537708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.585 [2024-11-20 12:37:16.537717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.585 [2024-11-20 12:37:16.537723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.585 [2024-11-20 12:37:16.537730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.585 [2024-11-20 12:37:16.550041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.585 [2024-11-20 12:37:16.550433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.585 [2024-11-20 12:37:16.550450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.585 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:33.585 [2024-11-20 12:37:16.550457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.585 [2024-11-20 12:37:16.550640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.585 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:27:33.585 [2024-11-20 12:37:16.550819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.585 [2024-11-20 12:37:16.550828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.585 [2024-11-20 12:37:16.550835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.585 [2024-11-20 12:37:16.550842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.585 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:33.585 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:33.585 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:33.585 [2024-11-20 12:37:16.563169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.585 [2024-11-20 12:37:16.563455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.585 [2024-11-20 12:37:16.563471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.585 [2024-11-20 12:37:16.563479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.585 [2024-11-20 12:37:16.563656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.585 [2024-11-20 12:37:16.563834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.585 [2024-11-20 12:37:16.563843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.585 [2024-11-20 12:37:16.563849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.585 [2024-11-20 12:37:16.563856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.585 [2024-11-20 12:37:16.576339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.585 [2024-11-20 12:37:16.576639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.585 [2024-11-20 12:37:16.576655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.585 [2024-11-20 12:37:16.576663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.585 [2024-11-20 12:37:16.576841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.585 [2024-11-20 12:37:16.577024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.585 [2024-11-20 12:37:16.577035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.585 [2024-11-20 12:37:16.577042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.585 [2024-11-20 12:37:16.577049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.585 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:33.585 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:33.585 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:33.585 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:33.585 [2024-11-20 12:37:16.589395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.585 [2024-11-20 12:37:16.589810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.585 [2024-11-20 12:37:16.589827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.585 [2024-11-20 12:37:16.589835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.585 [2024-11-20 12:37:16.590045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.585 [2024-11-20 12:37:16.590225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.585 [2024-11-20 12:37:16.590234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.585 [2024-11-20 12:37:16.590246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.585 [2024-11-20 12:37:16.590253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.585 [2024-11-20 12:37:16.592130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:33.586 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:33.586 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:33.586 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:33.586 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:33.586 [2024-11-20 12:37:16.602605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.586 [2024-11-20 12:37:16.602900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.586 [2024-11-20 12:37:16.602917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.586 [2024-11-20 12:37:16.602925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.586 [2024-11-20 12:37:16.603107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.586 [2024-11-20 12:37:16.603290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.586 [2024-11-20 12:37:16.603299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.586 [2024-11-20 12:37:16.603305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.586 [2024-11-20 12:37:16.603312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.586 [2024-11-20 12:37:16.615675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.586 [2024-11-20 12:37:16.616033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.586 [2024-11-20 12:37:16.616051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.586 [2024-11-20 12:37:16.616060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.586 [2024-11-20 12:37:16.616238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.586 [2024-11-20 12:37:16.616415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.586 [2024-11-20 12:37:16.616425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.586 [2024-11-20 12:37:16.616432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.586 [2024-11-20 12:37:16.616439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.586 [2024-11-20 12:37:16.628778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.586 [2024-11-20 12:37:16.629071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.586 [2024-11-20 12:37:16.629089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.586 [2024-11-20 12:37:16.629098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.586 [2024-11-20 12:37:16.629275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.586 [2024-11-20 12:37:16.629454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.586 [2024-11-20 12:37:16.629467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.586 [2024-11-20 12:37:16.629474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.586 [2024-11-20 12:37:16.629481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.586 Malloc0
00:27:33.586 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:33.586 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:33.586 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:33.586 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:33.586 [2024-11-20 12:37:16.641817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.586 [2024-11-20 12:37:16.642106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.586 [2024-11-20 12:37:16.642123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.586 [2024-11-20 12:37:16.642131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.586 [2024-11-20 12:37:16.642308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.586 [2024-11-20 12:37:16.642486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.586 [2024-11-20 12:37:16.642495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.586 [2024-11-20 12:37:16.642502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.586 [2024-11-20 12:37:16.642508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.586 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:33.586 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:33.586 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:33.586 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:33.586 [2024-11-20 12:37:16.655012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.586 [2024-11-20 12:37:16.655308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.586 [2024-11-20 12:37:16.655325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf55500 with addr=10.0.0.2, port=4420
00:27:33.586 [2024-11-20 12:37:16.655332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55500 is same with the state(6) to be set
00:27:33.586 [2024-11-20 12:37:16.655509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf55500 (9): Bad file descriptor
00:27:33.586 [2024-11-20 12:37:16.655687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:33.586 [2024-11-20 12:37:16.655696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:33.586 [2024-11-20 12:37:16.655703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:33.586 [2024-11-20 12:37:16.655709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:33.586 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:33.586 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:33.586 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:33.586 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:33.586 [2024-11-20 12:37:16.660054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:33.586 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:33.586 12:37:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 596046
00:27:33.586 [2024-11-20 12:37:16.668049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:33.846 [2024-11-20 12:37:16.737073] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:27:35.042 4623.43 IOPS, 18.06 MiB/s
[2024-11-20T11:37:19.535Z] 5406.25 IOPS, 21.12 MiB/s
[2024-11-20T11:37:20.472Z] 6048.44 IOPS, 23.63 MiB/s
[2024-11-20T11:37:21.410Z] 6557.90 IOPS, 25.62 MiB/s
[2024-11-20T11:37:22.346Z] 6978.36 IOPS, 27.26 MiB/s
[2024-11-20T11:37:23.283Z] 7330.58 IOPS, 28.64 MiB/s
[2024-11-20T11:37:24.219Z] 7618.92 IOPS, 29.76 MiB/s
[2024-11-20T11:37:25.161Z] 7873.43 IOPS, 30.76 MiB/s
[2024-11-20T11:37:25.161Z] 8095.53 IOPS, 31.62 MiB/s
00:27:42.045 Latency(us)
00:27:42.045 [2024-11-20T11:37:25.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:42.045 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:42.045 Verification LBA range: start 0x0 length 0x4000
00:27:42.045 Nvme1n1 : 15.01 8100.48 31.64 12854.64 0.00 6088.49 448.78 25302.59
00:27:42.045 [2024-11-20T11:37:25.161Z] ===================================================================================================================
00:27:42.045 [2024-11-20T11:37:25.161Z] Total : 8100.48 31.64 12854.64 0.00 6088.49 448.78 25302.59
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:42.304 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 597073 ']'
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 597073
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 597073 ']'
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 597073
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:42.304 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 597073
00:27:42.564 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:42.564 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:42.564 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 597073'
killing process with pid 597073
12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 597073
00:27:42.564 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 597073
00:27:42.564 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:42.564 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:42.564 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:42.564 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:27:42.564 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:27:42.564 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:42.564 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:27:42.564 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:42.564 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:42.564 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:42.564 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:42.564 12:37:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:45.104
00:27:45.104 real 0m26.213s
00:27:45.104 user 1m1.467s
00:27:45.104 sys 0m6.738s
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:45.104 ************************************
00:27:45.104 END TEST nvmf_bdevperf
00:27:45.104 ************************************
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.104 ************************************
00:27:45.104 START TEST nvmf_target_disconnect
00:27:45.104 ************************************
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:27:45.104 * Looking for test storage...
00:27:45.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:27:45.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:45.104 --rc genhtml_branch_coverage=1
00:27:45.104 --rc genhtml_function_coverage=1
00:27:45.104 --rc genhtml_legend=1
00:27:45.104 --rc geninfo_all_blocks=1
00:27:45.104 --rc geninfo_unexecuted_blocks=1
00:27:45.104 00:27:45.104 ' 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:45.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.104 --rc genhtml_branch_coverage=1 00:27:45.104 --rc genhtml_function_coverage=1 00:27:45.104 --rc genhtml_legend=1 00:27:45.104 --rc geninfo_all_blocks=1 00:27:45.104 --rc geninfo_unexecuted_blocks=1 00:27:45.104 00:27:45.104 ' 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:45.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.104 --rc genhtml_branch_coverage=1 00:27:45.104 --rc genhtml_function_coverage=1 00:27:45.104 --rc genhtml_legend=1 00:27:45.104 --rc geninfo_all_blocks=1 00:27:45.104 --rc geninfo_unexecuted_blocks=1 00:27:45.104 00:27:45.104 ' 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:45.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.104 --rc genhtml_branch_coverage=1 00:27:45.104 --rc genhtml_function_coverage=1 00:27:45.104 --rc genhtml_legend=1 00:27:45.104 --rc geninfo_all_blocks=1 00:27:45.104 --rc geninfo_unexecuted_blocks=1 00:27:45.104 00:27:45.104 ' 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.104 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:45.105 12:37:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:45.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:45.105 12:37:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:51.679 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.679 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:51.679 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:51.679 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:51.679 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:51.679 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:51.679 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:51.679 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:51.679 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:51.679 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:51.679 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:51.679 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:51.679 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:51.679 
12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:51.679 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:51.679 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.679 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.679 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.679 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:51.680 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:51.680 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:51.680 Found net devices under 0000:86:00.0: cvl_0_0 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:51.680 Found net devices under 0000:86:00.1: cvl_0_1 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:51.680 12:37:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:51.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:27:51.680 00:27:51.680 --- 10.0.0.2 ping statistics --- 00:27:51.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.680 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:51.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:51.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:27:51.680 00:27:51.680 --- 10.0.0.1 ping statistics --- 00:27:51.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.680 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:51.680 12:37:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:51.680 ************************************ 00:27:51.680 START TEST nvmf_target_disconnect_tc1 00:27:51.680 ************************************ 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:51.680 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:51.681 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:51.681 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:51.681 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:51.681 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:51.681 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:51.681 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:51.681 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:51.681 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:51.681 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:51.681 12:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:51.681 [2024-11-20 12:37:34.023881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.681 [2024-11-20 12:37:34.024008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaeaab0 with 
addr=10.0.0.2, port=4420 00:27:51.681 [2024-11-20 12:37:34.024052] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:51.681 [2024-11-20 12:37:34.024086] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:51.681 [2024-11-20 12:37:34.024105] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:51.681 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:51.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:51.681 Initializing NVMe Controllers 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:51.681 00:27:51.681 real 0m0.118s 00:27:51.681 user 0m0.052s 00:27:51.681 sys 0m0.066s 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:51.681 ************************************ 00:27:51.681 END TEST nvmf_target_disconnect_tc1 00:27:51.681 ************************************ 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:51.681 12:37:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:51.681 ************************************ 00:27:51.681 START TEST nvmf_target_disconnect_tc2 00:27:51.681 ************************************ 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=602160 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 602160 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 602160 ']' 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.681 [2024-11-20 12:37:34.163304] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:27:51.681 [2024-11-20 12:37:34.163345] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.681 [2024-11-20 12:37:34.244846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:51.681 [2024-11-20 12:37:34.287389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.681 [2024-11-20 12:37:34.287432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.681 [2024-11-20 12:37:34.287439] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.681 [2024-11-20 12:37:34.287445] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.681 [2024-11-20 12:37:34.287450] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:51.681 [2024-11-20 12:37:34.289108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:51.681 [2024-11-20 12:37:34.289215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:51.681 [2024-11-20 12:37:34.289320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:51.681 [2024-11-20 12:37:34.289321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.681 Malloc0 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.681 12:37:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.681 [2024-11-20 12:37:34.471353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.681 12:37:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.681 [2024-11-20 12:37:34.503590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.681 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.682 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.682 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=602267 00:27:51.682 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:51.682 12:37:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:53.597 12:37:36 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 602160 00:27:53.598 12:37:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 
Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 [2024-11-20 12:37:36.532108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 
00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 
00:27:53.598 [2024-11-20 12:37:36.532317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Read completed with error (sct=0, sc=8) 00:27:53.598 starting I/O failed 00:27:53.598 Write completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Write completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Write completed with error (sct=0, sc=8) 00:27:53.599 
starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Write completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Write completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Write completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Write completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Write completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 [2024-11-20 12:37:36.532516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Write completed with error (sct=0, 
sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Write completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Write completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Write completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Write completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Write completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Write completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Write completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Write completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 Read completed with error (sct=0, sc=8) 00:27:53.599 starting I/O failed 00:27:53.599 [2024-11-20 12:37:36.532713] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:53.599 [2024-11-20 12:37:36.532905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.599 [2024-11-20 12:37:36.532928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.599 qpair failed and we were unable to recover it. 00:27:53.599 [2024-11-20 12:37:36.533143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.599 [2024-11-20 12:37:36.533154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.599 qpair failed and we were unable to recover it. 00:27:53.599 [2024-11-20 12:37:36.533303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.599 [2024-11-20 12:37:36.533314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.599 qpair failed and we were unable to recover it. 00:27:53.599 [2024-11-20 12:37:36.533461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.599 [2024-11-20 12:37:36.533471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.599 qpair failed and we were unable to recover it. 00:27:53.599 [2024-11-20 12:37:36.533547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.599 [2024-11-20 12:37:36.533556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.599 qpair failed and we were unable to recover it. 
00:27:53.599 [2024-11-20 12:37:36.533647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.599 [2024-11-20 12:37:36.533656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.599 qpair failed and we were unable to recover it. 00:27:53.599 [2024-11-20 12:37:36.533852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.599 [2024-11-20 12:37:36.533862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.599 qpair failed and we were unable to recover it. 00:27:53.599 [2024-11-20 12:37:36.533922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.599 [2024-11-20 12:37:36.533931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.599 qpair failed and we were unable to recover it. 00:27:53.599 [2024-11-20 12:37:36.534011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.599 [2024-11-20 12:37:36.534028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.599 qpair failed and we were unable to recover it. 00:27:53.599 [2024-11-20 12:37:36.534120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.599 [2024-11-20 12:37:36.534129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.599 qpair failed and we were unable to recover it. 
00:27:53.599 [2024-11-20 12:37:36.534193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.599 [2024-11-20 12:37:36.534203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.599 qpair failed and we were unable to recover it. 00:27:53.599 [2024-11-20 12:37:36.534282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.599 [2024-11-20 12:37:36.534292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.599 qpair failed and we were unable to recover it. 00:27:53.599 [2024-11-20 12:37:36.534378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.599 [2024-11-20 12:37:36.534388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.599 qpair failed and we were unable to recover it. 00:27:53.599 [2024-11-20 12:37:36.534467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.599 [2024-11-20 12:37:36.534476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.600 qpair failed and we were unable to recover it. 00:27:53.600 [2024-11-20 12:37:36.534645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.600 [2024-11-20 12:37:36.534654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.600 qpair failed and we were unable to recover it. 
00:27:53.600 [2024-11-20 12:37:36.534741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.600 [2024-11-20 12:37:36.534750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.600 qpair failed and we were unable to recover it. 00:27:53.600 [2024-11-20 12:37:36.534829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.600 [2024-11-20 12:37:36.534839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.600 qpair failed and we were unable to recover it. 00:27:53.600 [2024-11-20 12:37:36.534910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.600 [2024-11-20 12:37:36.534919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.600 qpair failed and we were unable to recover it. 00:27:53.600 [2024-11-20 12:37:36.534977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.600 [2024-11-20 12:37:36.534987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.600 qpair failed and we were unable to recover it. 00:27:53.600 [2024-11-20 12:37:36.535072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.600 [2024-11-20 12:37:36.535082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.600 qpair failed and we were unable to recover it. 
00:27:53.600 [2024-11-20 12:37:36.535156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.600 [2024-11-20 12:37:36.535165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:53.600 qpair failed and we were unable to recover it.
[... 2024-11-20 12:37:36.535228 through 12:37:36.548030: the same error triplet (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it.") repeats ~114 more times for tqpair=0x7ff894000b90, tqpair=0x1d57ba0, and tqpair=0x7ff8a0000b90, all with addr=10.0.0.2, port=4420 ...]
00:27:53.604 [2024-11-20 12:37:36.548098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.548112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.548204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.548218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.548319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.548333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.548473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.548488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.548560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.548574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 
00:27:53.604 [2024-11-20 12:37:36.548643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.548657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.548787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.548801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.548961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.548980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.549070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.549085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.549215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.549230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 
00:27:53.604 [2024-11-20 12:37:36.549318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.549333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.549401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.549416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.549496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.549513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.549647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.549661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.549746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.549760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 
00:27:53.604 [2024-11-20 12:37:36.549836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.549850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.549934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.549953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.550088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.550102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.550185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.550199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.550455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.550469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 
00:27:53.604 [2024-11-20 12:37:36.550559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.550573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.550710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.550723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.550891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.550921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.551170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.604 [2024-11-20 12:37:36.551201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.604 qpair failed and we were unable to recover it. 00:27:53.604 [2024-11-20 12:37:36.551462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.551493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 
00:27:53.605 [2024-11-20 12:37:36.551679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.551709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.551887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.551918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.552114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.552129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.552335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.552366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.552538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.552569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 
00:27:53.605 [2024-11-20 12:37:36.552697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.552728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.552858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.552890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.552997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.553012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.553147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.553161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.553367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.553399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 
00:27:53.605 [2024-11-20 12:37:36.553658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.553690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.553870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.553901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.554006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.554020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.554155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.554169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.554341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.554372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 
00:27:53.605 [2024-11-20 12:37:36.554631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.554661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.554870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.554900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.555077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.555108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.555237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.555267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.555491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.555505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 
00:27:53.605 [2024-11-20 12:37:36.555651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.555665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.555810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.555824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.555976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.555994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.556076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.556091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.556236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.556250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 
00:27:53.605 [2024-11-20 12:37:36.556455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.556486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.556670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.556701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.556831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.556862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.557098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.557113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.557213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.557227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 
00:27:53.605 [2024-11-20 12:37:36.557324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.557338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.557595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.605 [2024-11-20 12:37:36.557610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.605 qpair failed and we were unable to recover it. 00:27:53.605 [2024-11-20 12:37:36.557792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.606 [2024-11-20 12:37:36.557824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.606 qpair failed and we were unable to recover it. 00:27:53.606 [2024-11-20 12:37:36.557960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.606 [2024-11-20 12:37:36.557992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.606 qpair failed and we were unable to recover it. 00:27:53.606 [2024-11-20 12:37:36.558234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.606 [2024-11-20 12:37:36.558265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.606 qpair failed and we were unable to recover it. 
00:27:53.606 [2024-11-20 12:37:36.558462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.606 [2024-11-20 12:37:36.558493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.606 qpair failed and we were unable to recover it. 00:27:53.606 [2024-11-20 12:37:36.558709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.606 [2024-11-20 12:37:36.558740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.606 qpair failed and we were unable to recover it. 00:27:53.606 [2024-11-20 12:37:36.558854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.606 [2024-11-20 12:37:36.558886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.606 qpair failed and we were unable to recover it. 00:27:53.606 [2024-11-20 12:37:36.559069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.606 [2024-11-20 12:37:36.559101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.606 qpair failed and we were unable to recover it. 00:27:53.606 [2024-11-20 12:37:36.559289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.606 [2024-11-20 12:37:36.559320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.606 qpair failed and we were unable to recover it. 
00:27:53.606 [2024-11-20 12:37:36.559583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.606 [2024-11-20 12:37:36.559613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.606 qpair failed and we were unable to recover it. 00:27:53.606 [2024-11-20 12:37:36.559782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.606 [2024-11-20 12:37:36.559813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.606 qpair failed and we were unable to recover it. 00:27:53.606 [2024-11-20 12:37:36.559981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.606 [2024-11-20 12:37:36.560014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.606 qpair failed and we were unable to recover it. 00:27:53.606 [2024-11-20 12:37:36.560180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.606 [2024-11-20 12:37:36.560211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.606 qpair failed and we were unable to recover it. 00:27:53.606 [2024-11-20 12:37:36.560381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.606 [2024-11-20 12:37:36.560413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.606 qpair failed and we were unable to recover it. 
00:27:53.606 [2024-11-20 12:37:36.560598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.606 [2024-11-20 12:37:36.560629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.606 qpair failed and we were unable to recover it. 00:27:53.606 [2024-11-20 12:37:36.560745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.606 [2024-11-20 12:37:36.560776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.606 qpair failed and we were unable to recover it. 00:27:53.606 [2024-11-20 12:37:36.560898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.606 [2024-11-20 12:37:36.560929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.606 qpair failed and we were unable to recover it. 00:27:53.606 [2024-11-20 12:37:36.561151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.606 [2024-11-20 12:37:36.561182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.606 qpair failed and we were unable to recover it. 00:27:53.606 [2024-11-20 12:37:36.561370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.606 [2024-11-20 12:37:36.561400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.606 qpair failed and we were unable to recover it. 
00:27:53.606 [2024-11-20 12:37:36.561638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.606 [2024-11-20 12:37:36.561668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:53.606 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111 → nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it.) repeats for every reconnect attempt, timestamps 12:37:36.561851 through 12:37:36.587568 ...]
00:27:53.610 [2024-11-20 12:37:36.587826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.610 [2024-11-20 12:37:36.587857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.610 qpair failed and we were unable to recover it. 00:27:53.610 [2024-11-20 12:37:36.588109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.610 [2024-11-20 12:37:36.588141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.610 qpair failed and we were unable to recover it. 00:27:53.610 [2024-11-20 12:37:36.588403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.610 [2024-11-20 12:37:36.588435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.610 qpair failed and we were unable to recover it. 00:27:53.610 [2024-11-20 12:37:36.588722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.610 [2024-11-20 12:37:36.588753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.610 qpair failed and we were unable to recover it. 00:27:53.610 [2024-11-20 12:37:36.589027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.610 [2024-11-20 12:37:36.589059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.610 qpair failed and we were unable to recover it. 
00:27:53.610 [2024-11-20 12:37:36.589297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.610 [2024-11-20 12:37:36.589329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.610 qpair failed and we were unable to recover it. 00:27:53.610 [2024-11-20 12:37:36.589525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.610 [2024-11-20 12:37:36.589556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.610 qpair failed and we were unable to recover it. 00:27:53.610 [2024-11-20 12:37:36.589836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.610 [2024-11-20 12:37:36.589866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.610 qpair failed and we were unable to recover it. 00:27:53.610 [2024-11-20 12:37:36.590077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.610 [2024-11-20 12:37:36.590109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.610 qpair failed and we were unable to recover it. 00:27:53.610 [2024-11-20 12:37:36.590281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.610 [2024-11-20 12:37:36.590311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.610 qpair failed and we were unable to recover it. 
00:27:53.610 [2024-11-20 12:37:36.590415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.610 [2024-11-20 12:37:36.590447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.610 qpair failed and we were unable to recover it. 00:27:53.610 [2024-11-20 12:37:36.590634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.610 [2024-11-20 12:37:36.590664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.610 qpair failed and we were unable to recover it. 00:27:53.610 [2024-11-20 12:37:36.590940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.610 [2024-11-20 12:37:36.590983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.610 qpair failed and we were unable to recover it. 00:27:53.610 [2024-11-20 12:37:36.591221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.610 [2024-11-20 12:37:36.591253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.610 qpair failed and we were unable to recover it. 00:27:53.610 [2024-11-20 12:37:36.591426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.610 [2024-11-20 12:37:36.591456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.610 qpair failed and we were unable to recover it. 
00:27:53.610 [2024-11-20 12:37:36.591647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.591677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.591957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.591989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.592266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.592304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.592552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.592582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.592769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.592799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 
00:27:53.611 [2024-11-20 12:37:36.593086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.593119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.593357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.593389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.593639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.593669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.593964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.594001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.594269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.594301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 
00:27:53.611 [2024-11-20 12:37:36.594564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.594595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.594834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.594865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.595134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.595168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.595431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.595462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.595747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.595779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 
00:27:53.611 [2024-11-20 12:37:36.595965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.595997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.596290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.596321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.596529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.596560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.596796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.596827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.597082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.597115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 
00:27:53.611 [2024-11-20 12:37:36.597402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.597434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.597674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.597704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.597823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.597855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.598065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.598098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.598364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.598395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 
00:27:53.611 [2024-11-20 12:37:36.598623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.598654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.598915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.598957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.599241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.599273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.599456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.599487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 00:27:53.611 [2024-11-20 12:37:36.599674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.611 [2024-11-20 12:37:36.599705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.611 qpair failed and we were unable to recover it. 
00:27:53.611 [2024-11-20 12:37:36.599969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.600002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.600289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.600319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.600510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.600546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.600833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.600865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.600974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.601008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 
00:27:53.612 [2024-11-20 12:37:36.601320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.601353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.601613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.601644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.601849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.601881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.602073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.602107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.602346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.602377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 
00:27:53.612 [2024-11-20 12:37:36.602634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.602665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.602845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.602876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.603136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.603174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.603461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.603493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.603758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.603789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 
00:27:53.612 [2024-11-20 12:37:36.604080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.604111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.604387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.604418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.604699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.604730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.604926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.604968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.605209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.605242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 
00:27:53.612 [2024-11-20 12:37:36.605428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.605459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.605650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.605682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.605958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.605990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.606270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.606300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.606570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.606602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 
00:27:53.612 [2024-11-20 12:37:36.606873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.606904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.607194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.607227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.607472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.607502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.607766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.607797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 00:27:53.612 [2024-11-20 12:37:36.608084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.608116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 
00:27:53.612 [2024-11-20 12:37:36.608327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.612 [2024-11-20 12:37:36.608359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.612 qpair failed and we were unable to recover it. 
[... the posix_sock_create / nvme_tcp_qpair_connect_sock error pair above repeats roughly 114 more times between 12:37:36.608 and 12:37:36.637, differing only in timestamp; from 12:37:36.610 onward the failing tqpair is 0x7ff898000b90 instead of 0x7ff8a0000b90. Every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111 (ECONNREFUSED), and ends with "qpair failed and we were unable to recover it." ...]
00:27:53.616 [2024-11-20 12:37:36.638034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.616 [2024-11-20 12:37:36.638066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.616 qpair failed and we were unable to recover it. 00:27:53.616 [2024-11-20 12:37:36.638333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.616 [2024-11-20 12:37:36.638364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.616 qpair failed and we were unable to recover it. 00:27:53.616 [2024-11-20 12:37:36.638599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.616 [2024-11-20 12:37:36.638630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.616 qpair failed and we were unable to recover it. 00:27:53.616 [2024-11-20 12:37:36.638854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.616 [2024-11-20 12:37:36.638885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.616 qpair failed and we were unable to recover it. 00:27:53.616 [2024-11-20 12:37:36.639147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.616 [2024-11-20 12:37:36.639178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.616 qpair failed and we were unable to recover it. 
00:27:53.616 [2024-11-20 12:37:36.639366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.616 [2024-11-20 12:37:36.639397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.616 qpair failed and we were unable to recover it. 00:27:53.616 [2024-11-20 12:37:36.639583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.616 [2024-11-20 12:37:36.639614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.639839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.639869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.640110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.640142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.640397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.640427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 
00:27:53.617 [2024-11-20 12:37:36.640601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.640633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.640849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.640880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.641134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.641167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.641387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.641419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.641689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.641762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 
00:27:53.617 [2024-11-20 12:37:36.642060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.642099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.642389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.642422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.642679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.642710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.642907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.642939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.643217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.643249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 
00:27:53.617 [2024-11-20 12:37:36.643525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.643555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.643686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.643717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.643903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.643934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.644209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.644241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.644481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.644512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 
00:27:53.617 [2024-11-20 12:37:36.644648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.644679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.644941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.644983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.645266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.645308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.645566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.645598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.645876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.645907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 
00:27:53.617 [2024-11-20 12:37:36.646196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.646229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.646502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.646533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.646750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.646781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.646974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.647007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.647122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.647155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 
00:27:53.617 [2024-11-20 12:37:36.647351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.647382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.647646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.617 [2024-11-20 12:37:36.647678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.617 qpair failed and we were unable to recover it. 00:27:53.617 [2024-11-20 12:37:36.647880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.647912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.648203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.648236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.648504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.648536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 
00:27:53.618 [2024-11-20 12:37:36.648823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.648854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.649133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.649166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.649426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.649458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.649753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.649783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.650049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.650082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 
00:27:53.618 [2024-11-20 12:37:36.650326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.650357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.650599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.650630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.650806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.650836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.651025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.651057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.651319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.651350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 
00:27:53.618 [2024-11-20 12:37:36.651620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.651651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.651841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.651873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.652136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.652168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.652383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.652413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.652609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.652641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 
00:27:53.618 [2024-11-20 12:37:36.652898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.652929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.653198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.653230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.653428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.653459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.653721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.653751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.654045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.654078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 
00:27:53.618 [2024-11-20 12:37:36.654312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.654343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.654519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.654549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.654794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.654825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.655010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.655041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.655329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.655360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 
00:27:53.618 [2024-11-20 12:37:36.655553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.655584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.655875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.655907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.656177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.656215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.656457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.618 [2024-11-20 12:37:36.656488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.618 qpair failed and we were unable to recover it. 00:27:53.618 [2024-11-20 12:37:36.656782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.619 [2024-11-20 12:37:36.656813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.619 qpair failed and we were unable to recover it. 
00:27:53.619 [2024-11-20 12:37:36.657067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.619 [2024-11-20 12:37:36.657100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.619 qpair failed and we were unable to recover it. 00:27:53.619 [2024-11-20 12:37:36.657295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.619 [2024-11-20 12:37:36.657328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.619 qpair failed and we were unable to recover it. 00:27:53.619 [2024-11-20 12:37:36.657517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.619 [2024-11-20 12:37:36.657547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.619 qpair failed and we were unable to recover it. 00:27:53.619 [2024-11-20 12:37:36.657817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.619 [2024-11-20 12:37:36.657848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.619 qpair failed and we were unable to recover it. 00:27:53.619 [2024-11-20 12:37:36.658032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.619 [2024-11-20 12:37:36.658064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.619 qpair failed and we were unable to recover it. 
00:27:53.619 [2024-11-20 12:37:36.658331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.619 [2024-11-20 12:37:36.658362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.619 qpair failed and we were unable to recover it. 00:27:53.619 [2024-11-20 12:37:36.658602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.619 [2024-11-20 12:37:36.658633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.619 qpair failed and we were unable to recover it. 00:27:53.619 [2024-11-20 12:37:36.658826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.619 [2024-11-20 12:37:36.658857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.619 qpair failed and we were unable to recover it. 00:27:53.619 [2024-11-20 12:37:36.659043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.619 [2024-11-20 12:37:36.659076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.619 qpair failed and we were unable to recover it. 00:27:53.619 [2024-11-20 12:37:36.659359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.619 [2024-11-20 12:37:36.659390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.619 qpair failed and we were unable to recover it. 
00:27:53.623 [2024-11-20 12:37:36.690295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.690328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.690460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.690491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.690720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.690751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.691009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.691042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.691311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.691342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 
00:27:53.623 [2024-11-20 12:37:36.691620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.691652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.691942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.691983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.692121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.692153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.692350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.692382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.692650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.692682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 
00:27:53.623 [2024-11-20 12:37:36.692893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.692925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.693165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.693199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.693449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.693480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.693703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.693734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.693927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.693978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 
00:27:53.623 [2024-11-20 12:37:36.694127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.694159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.694432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.694463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.694761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.694793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.695083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.695117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.695314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.695345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 
00:27:53.623 [2024-11-20 12:37:36.695536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.695567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.695763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.695795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.696014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.696047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.696325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.696356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.696616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.696654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 
00:27:53.623 [2024-11-20 12:37:36.696903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.696934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.623 [2024-11-20 12:37:36.697222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.623 [2024-11-20 12:37:36.697255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.623 qpair failed and we were unable to recover it. 00:27:53.624 [2024-11-20 12:37:36.697462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.624 [2024-11-20 12:37:36.697494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.624 qpair failed and we were unable to recover it. 00:27:53.624 [2024-11-20 12:37:36.697677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.624 [2024-11-20 12:37:36.697709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.624 qpair failed and we were unable to recover it. 00:27:53.624 [2024-11-20 12:37:36.697886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.624 [2024-11-20 12:37:36.697917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.624 qpair failed and we were unable to recover it. 
00:27:53.624 [2024-11-20 12:37:36.698214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.624 [2024-11-20 12:37:36.698248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.624 qpair failed and we were unable to recover it. 00:27:53.624 [2024-11-20 12:37:36.698451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.624 [2024-11-20 12:37:36.698482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.624 qpair failed and we were unable to recover it. 00:27:53.624 [2024-11-20 12:37:36.698687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.624 [2024-11-20 12:37:36.698718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.624 qpair failed and we were unable to recover it. 00:27:53.624 [2024-11-20 12:37:36.698922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.624 [2024-11-20 12:37:36.698969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.624 qpair failed and we were unable to recover it. 00:27:53.624 [2024-11-20 12:37:36.699185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.624 [2024-11-20 12:37:36.699217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.624 qpair failed and we were unable to recover it. 
00:27:53.624 [2024-11-20 12:37:36.699426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.624 [2024-11-20 12:37:36.699458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.624 qpair failed and we were unable to recover it. 00:27:53.624 [2024-11-20 12:37:36.699679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.624 [2024-11-20 12:37:36.699710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.624 qpair failed and we were unable to recover it. 00:27:53.624 [2024-11-20 12:37:36.699830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.624 [2024-11-20 12:37:36.699862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.624 qpair failed and we were unable to recover it. 00:27:53.624 [2024-11-20 12:37:36.700083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.624 [2024-11-20 12:37:36.700117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.624 qpair failed and we were unable to recover it. 00:27:53.624 [2024-11-20 12:37:36.700301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.624 [2024-11-20 12:37:36.700333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.624 qpair failed and we were unable to recover it. 
00:27:53.624 [2024-11-20 12:37:36.700585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.624 [2024-11-20 12:37:36.700621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.624 qpair failed and we were unable to recover it. 00:27:53.624 [2024-11-20 12:37:36.700809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.624 [2024-11-20 12:37:36.700841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.624 qpair failed and we were unable to recover it. 00:27:53.624 [2024-11-20 12:37:36.701051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.624 [2024-11-20 12:37:36.701085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.624 qpair failed and we were unable to recover it. 00:27:53.624 [2024-11-20 12:37:36.701307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.624 [2024-11-20 12:37:36.701339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.624 qpair failed and we were unable to recover it. 00:27:53.922 [2024-11-20 12:37:36.701650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.922 [2024-11-20 12:37:36.701682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.922 qpair failed and we were unable to recover it. 
00:27:53.922 [2024-11-20 12:37:36.701991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.922 [2024-11-20 12:37:36.702025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.922 qpair failed and we were unable to recover it. 00:27:53.922 [2024-11-20 12:37:36.702183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.922 [2024-11-20 12:37:36.702214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.922 qpair failed and we were unable to recover it. 00:27:53.922 [2024-11-20 12:37:36.702395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.922 [2024-11-20 12:37:36.702428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.922 qpair failed and we were unable to recover it. 00:27:53.922 [2024-11-20 12:37:36.702681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.922 [2024-11-20 12:37:36.702714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.922 qpair failed and we were unable to recover it. 00:27:53.922 [2024-11-20 12:37:36.702999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.922 [2024-11-20 12:37:36.703034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.922 qpair failed and we were unable to recover it. 
00:27:53.922 [2024-11-20 12:37:36.703271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.922 [2024-11-20 12:37:36.703304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.922 qpair failed and we were unable to recover it. 00:27:53.922 [2024-11-20 12:37:36.703599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.922 [2024-11-20 12:37:36.703631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.922 qpair failed and we were unable to recover it. 00:27:53.922 [2024-11-20 12:37:36.703901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.922 [2024-11-20 12:37:36.703934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.922 qpair failed and we were unable to recover it. 00:27:53.922 [2024-11-20 12:37:36.704154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.922 [2024-11-20 12:37:36.704186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.922 qpair failed and we were unable to recover it. 00:27:53.922 [2024-11-20 12:37:36.704388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.922 [2024-11-20 12:37:36.704421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.922 qpair failed and we were unable to recover it. 
00:27:53.922 [2024-11-20 12:37:36.704558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.922 [2024-11-20 12:37:36.704590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.922 qpair failed and we were unable to recover it. 00:27:53.923 [2024-11-20 12:37:36.704838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.923 [2024-11-20 12:37:36.704871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.923 qpair failed and we were unable to recover it. 00:27:53.923 [2024-11-20 12:37:36.705122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.923 [2024-11-20 12:37:36.705156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.923 qpair failed and we were unable to recover it. 00:27:53.923 [2024-11-20 12:37:36.705341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.923 [2024-11-20 12:37:36.705375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.923 qpair failed and we were unable to recover it. 00:27:53.923 [2024-11-20 12:37:36.705515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.923 [2024-11-20 12:37:36.705548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.923 qpair failed and we were unable to recover it. 
00:27:53.923 [2024-11-20 12:37:36.705825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.923 [2024-11-20 12:37:36.705857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.923 qpair failed and we were unable to recover it. 00:27:53.923 [2024-11-20 12:37:36.706122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.923 [2024-11-20 12:37:36.706157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.923 qpair failed and we were unable to recover it. 00:27:53.923 [2024-11-20 12:37:36.706373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.923 [2024-11-20 12:37:36.706405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.923 qpair failed and we were unable to recover it. 00:27:53.923 [2024-11-20 12:37:36.706535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.923 [2024-11-20 12:37:36.706567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.923 qpair failed and we were unable to recover it. 00:27:53.923 [2024-11-20 12:37:36.706768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.923 [2024-11-20 12:37:36.706808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.923 qpair failed and we were unable to recover it. 
00:27:53.923 [2024-11-20 12:37:36.707012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.923 [2024-11-20 12:37:36.707044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.923 qpair failed and we were unable to recover it. 00:27:53.923 [2024-11-20 12:37:36.707238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.923 [2024-11-20 12:37:36.707270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.923 qpair failed and we were unable to recover it. 00:27:53.923 [2024-11-20 12:37:36.707453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.923 [2024-11-20 12:37:36.707484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.923 qpair failed and we were unable to recover it. 00:27:53.923 [2024-11-20 12:37:36.707774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.923 [2024-11-20 12:37:36.707807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.923 qpair failed and we were unable to recover it. 00:27:53.923 [2024-11-20 12:37:36.708021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.923 [2024-11-20 12:37:36.708055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.923 qpair failed and we were unable to recover it. 
00:27:53.923 [2024-11-20 12:37:36.708296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.923 [2024-11-20 12:37:36.708328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.923 qpair failed and we were unable to recover it. 00:27:53.923 [2024-11-20 12:37:36.708537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.923 [2024-11-20 12:37:36.708570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.923 qpair failed and we were unable to recover it. 00:27:53.923 [2024-11-20 12:37:36.708697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.923 [2024-11-20 12:37:36.708729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.923 qpair failed and we were unable to recover it. 00:27:53.923 [2024-11-20 12:37:36.708985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.923 [2024-11-20 12:37:36.709018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.923 qpair failed and we were unable to recover it. 00:27:53.923 [2024-11-20 12:37:36.709297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.923 [2024-11-20 12:37:36.709329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.923 qpair failed and we were unable to recover it. 
00:27:53.923 [2024-11-20 12:37:36.709485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.923 [2024-11-20 12:37:36.709517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:53.923 qpair failed and we were unable to recover it.
00:27:53.927 [2024-11-20 12:37:36.740559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.927 [2024-11-20 12:37:36.740590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.927 qpair failed and we were unable to recover it. 00:27:53.927 [2024-11-20 12:37:36.740860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.927 [2024-11-20 12:37:36.740892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.927 qpair failed and we were unable to recover it. 00:27:53.927 [2024-11-20 12:37:36.741014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.927 [2024-11-20 12:37:36.741047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.927 qpair failed and we were unable to recover it. 00:27:53.927 [2024-11-20 12:37:36.741271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.927 [2024-11-20 12:37:36.741304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.927 qpair failed and we were unable to recover it. 00:27:53.927 [2024-11-20 12:37:36.741460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.927 [2024-11-20 12:37:36.741490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.927 qpair failed and we were unable to recover it. 
00:27:53.927 [2024-11-20 12:37:36.741789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.927 [2024-11-20 12:37:36.741820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.927 qpair failed and we were unable to recover it. 00:27:53.927 [2024-11-20 12:37:36.742085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.927 [2024-11-20 12:37:36.742118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.927 qpair failed and we were unable to recover it. 00:27:53.927 [2024-11-20 12:37:36.742315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.927 [2024-11-20 12:37:36.742346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.927 qpair failed and we were unable to recover it. 00:27:53.927 [2024-11-20 12:37:36.742520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.927 [2024-11-20 12:37:36.742552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.927 qpair failed and we were unable to recover it. 00:27:53.927 [2024-11-20 12:37:36.742768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.927 [2024-11-20 12:37:36.742800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.927 qpair failed and we were unable to recover it. 
00:27:53.927 [2024-11-20 12:37:36.743019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.927 [2024-11-20 12:37:36.743054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.927 qpair failed and we were unable to recover it. 00:27:53.927 [2024-11-20 12:37:36.743264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.927 [2024-11-20 12:37:36.743295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.927 qpair failed and we were unable to recover it. 00:27:53.927 [2024-11-20 12:37:36.743501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.927 [2024-11-20 12:37:36.743533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.927 qpair failed and we were unable to recover it. 00:27:53.927 [2024-11-20 12:37:36.743880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.927 [2024-11-20 12:37:36.743914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.927 qpair failed and we were unable to recover it. 00:27:53.927 [2024-11-20 12:37:36.744194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.927 [2024-11-20 12:37:36.744228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.927 qpair failed and we were unable to recover it. 
00:27:53.927 [2024-11-20 12:37:36.744426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.927 [2024-11-20 12:37:36.744459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.927 qpair failed and we were unable to recover it. 00:27:53.927 [2024-11-20 12:37:36.744674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.927 [2024-11-20 12:37:36.744707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.927 qpair failed and we were unable to recover it. 00:27:53.927 [2024-11-20 12:37:36.744889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.927 [2024-11-20 12:37:36.744922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.927 qpair failed and we were unable to recover it. 00:27:53.927 [2024-11-20 12:37:36.745116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.927 [2024-11-20 12:37:36.745148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.745294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.745329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 
00:27:53.928 [2024-11-20 12:37:36.745523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.745556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.745773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.745806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.746078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.746113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.746398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.746429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.746565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.746597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 
00:27:53.928 [2024-11-20 12:37:36.746806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.746838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.747051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.747085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.747268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.747300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.747602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.747633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.747916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.747975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 
00:27:53.928 [2024-11-20 12:37:36.748264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.748296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.748544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.748576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.748781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.748813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.749009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.749043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.749233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.749271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 
00:27:53.928 [2024-11-20 12:37:36.749547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.749578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.749837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.749869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.750074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.750109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.750316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.750347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.750528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.750559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 
00:27:53.928 [2024-11-20 12:37:36.750858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.750890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.751108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.751142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.751366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.751398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.751657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.751688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.751969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.752002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 
00:27:53.928 [2024-11-20 12:37:36.752222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.752254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.752449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.752480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.928 [2024-11-20 12:37:36.752769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.928 [2024-11-20 12:37:36.752800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.928 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.752929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.752974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.753227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.753259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 
00:27:53.929 [2024-11-20 12:37:36.753550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.753582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.753807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.753839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.754117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.754150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.754346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.754379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.754521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.754552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 
00:27:53.929 [2024-11-20 12:37:36.754771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.754802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.754985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.755019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.755222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.755253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.755455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.755486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.755784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.755815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 
00:27:53.929 [2024-11-20 12:37:36.756086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.756118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.756355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.756387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.756686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.756718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.757024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.757057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.757313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.757344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 
00:27:53.929 [2024-11-20 12:37:36.757493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.757524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.757712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.757744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.757945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.757991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.758202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.758234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.758511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.758542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 
00:27:53.929 [2024-11-20 12:37:36.758822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.758854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.759073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.759106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.759309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.759341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.759539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.759571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.759842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.759879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 
00:27:53.929 [2024-11-20 12:37:36.760085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.760119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.760265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.760296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.760510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.929 [2024-11-20 12:37:36.760541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.929 qpair failed and we were unable to recover it. 00:27:53.929 [2024-11-20 12:37:36.760823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.930 [2024-11-20 12:37:36.760855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.930 qpair failed and we were unable to recover it. 00:27:53.930 [2024-11-20 12:37:36.761151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.930 [2024-11-20 12:37:36.761185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.930 qpair failed and we were unable to recover it. 
00:27:53.933 [2024-11-20 12:37:36.788122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.933 [2024-11-20 12:37:36.788154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.933 qpair failed and we were unable to recover it. 00:27:53.933 [2024-11-20 12:37:36.788283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.933 [2024-11-20 12:37:36.788316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.933 qpair failed and we were unable to recover it. 00:27:53.933 [2024-11-20 12:37:36.788458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.933 [2024-11-20 12:37:36.788492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.933 qpair failed and we were unable to recover it. 00:27:53.933 [2024-11-20 12:37:36.788741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.933 [2024-11-20 12:37:36.788774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.933 qpair failed and we were unable to recover it. 00:27:53.933 [2024-11-20 12:37:36.789045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.933 [2024-11-20 12:37:36.789086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.933 qpair failed and we were unable to recover it. 
00:27:53.933 [2024-11-20 12:37:36.789229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.933 [2024-11-20 12:37:36.789262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.933 qpair failed and we were unable to recover it. 00:27:53.933 [2024-11-20 12:37:36.789472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.933 [2024-11-20 12:37:36.789503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.933 qpair failed and we were unable to recover it. 00:27:53.933 [2024-11-20 12:37:36.789716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.933 [2024-11-20 12:37:36.789749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.933 qpair failed and we were unable to recover it. 00:27:53.933 [2024-11-20 12:37:36.790007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.933 [2024-11-20 12:37:36.790041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.933 qpair failed and we were unable to recover it. 00:27:53.933 [2024-11-20 12:37:36.790318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.933 [2024-11-20 12:37:36.790351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.933 qpair failed and we were unable to recover it. 
00:27:53.933 [2024-11-20 12:37:36.790507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.790538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.790778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.790809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.791078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.791112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.791260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.791292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.791488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.791520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 
00:27:53.934 [2024-11-20 12:37:36.791700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.791732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.791935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.791998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.792137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.792169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.792317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.792350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.792553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.792585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 
00:27:53.934 [2024-11-20 12:37:36.792860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.792891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.793227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.793261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.793407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.793440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.793642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.793674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.793889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.793921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 
00:27:53.934 [2024-11-20 12:37:36.794197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.794234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.794463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.794495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.794781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.794814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.795098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.795132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.795291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.795322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 
00:27:53.934 [2024-11-20 12:37:36.795529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.795561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.795781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.795813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.796133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.796168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.796309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.796340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.796515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.796547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 
00:27:53.934 [2024-11-20 12:37:36.796828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.796861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.796987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.797021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.797153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.797185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.797369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.797400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.797621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.797653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 
00:27:53.934 [2024-11-20 12:37:36.797836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.797866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.798142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.798175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.798310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.934 [2024-11-20 12:37:36.798343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.934 qpair failed and we were unable to recover it. 00:27:53.934 [2024-11-20 12:37:36.798596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.798627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.798761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.798792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 
00:27:53.935 [2024-11-20 12:37:36.799034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.799071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.799279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.799311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.799514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.799545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.799772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.799805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.800004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.800039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 
00:27:53.935 [2024-11-20 12:37:36.800250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.800284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.800540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.800573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.800813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.800849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.801032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.801067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.801193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.801225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 
00:27:53.935 [2024-11-20 12:37:36.801449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.801483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.801685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.801717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.801913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.801957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.802113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.802146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.802352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.802384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 
00:27:53.935 [2024-11-20 12:37:36.802621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.802653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.802850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.802883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.803099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.803133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.803352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.803385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.803595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.803626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 
00:27:53.935 [2024-11-20 12:37:36.803830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.803862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.804079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.804112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.804241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.804273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.804423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.804455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.804575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.804606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 
00:27:53.935 [2024-11-20 12:37:36.804867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.804899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.805053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.805094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.805250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.805281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.805393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.805424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.805707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.805739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 
00:27:53.935 [2024-11-20 12:37:36.805960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.805995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.806127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.806160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.806361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.935 [2024-11-20 12:37:36.806393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.935 qpair failed and we were unable to recover it. 00:27:53.935 [2024-11-20 12:37:36.806624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.936 [2024-11-20 12:37:36.806655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.936 qpair failed and we were unable to recover it. 00:27:53.936 [2024-11-20 12:37:36.806872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.936 [2024-11-20 12:37:36.806904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.936 qpair failed and we were unable to recover it. 
00:27:53.936 [... the identical three-message sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every subsequent retry from 12:37:36.807089 through 12:37:36.833182 ...]
00:27:53.939 [2024-11-20 12:37:36.833440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-20 12:37:36.833474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-20 12:37:36.833696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-20 12:37:36.833737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-20 12:37:36.833942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-20 12:37:36.833987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-20 12:37:36.834198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-20 12:37:36.834230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-20 12:37:36.834440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-20 12:37:36.834473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 
00:27:53.939 [2024-11-20 12:37:36.834606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-20 12:37:36.834639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-20 12:37:36.834898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-20 12:37:36.834931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-20 12:37:36.835080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-20 12:37:36.835113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-20 12:37:36.835350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-20 12:37:36.835382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-20 12:37:36.835677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-20 12:37:36.835712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 
00:27:53.940 [2024-11-20 12:37:36.835845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.835878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.836095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.836132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.836350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.836383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.836577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.836612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.836800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.836833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 
00:27:53.940 [2024-11-20 12:37:36.837052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.837088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.837232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.837265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.837449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.837482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.837688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.837721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.837928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.837972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 
00:27:53.940 [2024-11-20 12:37:36.838127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.838161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.838389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.838421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.838726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.838761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.839044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.839079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.839287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.839320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 
00:27:53.940 [2024-11-20 12:37:36.839508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.839540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.839779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.839812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.840077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.840111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.840303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.840336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.840528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.840561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 
00:27:53.940 [2024-11-20 12:37:36.840740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.840775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.841034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.841067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.841328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.841363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.841543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.841572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.841815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.841847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 
00:27:53.940 [2024-11-20 12:37:36.842069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.842106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.842308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.842342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.842469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.842501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.842784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.842815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.843036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.843071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 
00:27:53.940 [2024-11-20 12:37:36.843233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-20 12:37:36.843265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-20 12:37:36.843448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.843486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.843614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.843647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.843843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.843876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.844091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.844126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 
00:27:53.941 [2024-11-20 12:37:36.844336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.844368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.844506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.844537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.844761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.844793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.844995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.845029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.845232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.845264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 
00:27:53.941 [2024-11-20 12:37:36.845486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.845520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.845652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.845684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.845860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.845892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.846101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.846137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.846343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.846375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 
00:27:53.941 [2024-11-20 12:37:36.846597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.846631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.846908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.846942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.847107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.847141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.847393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.847428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.847626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.847660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 
00:27:53.941 [2024-11-20 12:37:36.847790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.847822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.848075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.848110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.848269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.848301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.848454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.848486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.848743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.848777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 
00:27:53.941 [2024-11-20 12:37:36.848971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.941 [2024-11-20 12:37:36.849007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.941 qpair failed and we were unable to recover it. 00:27:53.941 [2024-11-20 12:37:36.849190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.942 [2024-11-20 12:37:36.849222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.942 qpair failed and we were unable to recover it. 00:27:53.942 [2024-11-20 12:37:36.849416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.942 [2024-11-20 12:37:36.849449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.942 qpair failed and we were unable to recover it. 00:27:53.942 [2024-11-20 12:37:36.849653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.942 [2024-11-20 12:37:36.849687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.942 qpair failed and we were unable to recover it. 00:27:53.942 [2024-11-20 12:37:36.849810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.942 [2024-11-20 12:37:36.849843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.942 qpair failed and we were unable to recover it. 
00:27:53.942 [2024-11-20 12:37:36.849970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.942 [2024-11-20 12:37:36.850003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.942 qpair failed and we were unable to recover it. 00:27:53.942 [2024-11-20 12:37:36.850135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.942 [2024-11-20 12:37:36.850167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.942 qpair failed and we were unable to recover it. 00:27:53.942 [2024-11-20 12:37:36.850370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.942 [2024-11-20 12:37:36.850402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.942 qpair failed and we were unable to recover it. 00:27:53.942 [2024-11-20 12:37:36.850541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.942 [2024-11-20 12:37:36.850575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.942 qpair failed and we were unable to recover it. 00:27:53.942 [2024-11-20 12:37:36.850784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.942 [2024-11-20 12:37:36.850819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.942 qpair failed and we were unable to recover it. 
00:27:53.942 [2024-11-20 12:37:36.851019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.942 [2024-11-20 12:37:36.851054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.942 qpair failed and we were unable to recover it. 
00:27:53.944 [2024-11-20 12:37:36.868790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.944 [2024-11-20 12:37:36.868822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.944 qpair failed and we were unable to recover it. 00:27:53.944 [2024-11-20 12:37:36.869086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.944 [2024-11-20 12:37:36.869123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.944 qpair failed and we were unable to recover it. 00:27:53.944 [2024-11-20 12:37:36.869262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.944 [2024-11-20 12:37:36.869295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:53.944 qpair failed and we were unable to recover it. 00:27:53.944 [2024-11-20 12:37:36.869495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65af0 is same with the state(6) to be set 00:27:53.944 [2024-11-20 12:37:36.869822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.944 [2024-11-20 12:37:36.869902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:53.944 qpair failed and we were unable to recover it. 00:27:53.944 [2024-11-20 12:37:36.870185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.944 [2024-11-20 12:37:36.870235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.944 qpair failed and we were unable to recover it. 
00:27:53.945 [2024-11-20 12:37:36.880102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.945 [2024-11-20 12:37:36.880137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.945 qpair failed and we were unable to recover it. 00:27:53.945 [2024-11-20 12:37:36.880341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.945 [2024-11-20 12:37:36.880376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.945 qpair failed and we were unable to recover it. 00:27:53.945 [2024-11-20 12:37:36.880514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.880546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.880695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.880729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.880862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.880894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 
00:27:53.946 [2024-11-20 12:37:36.881044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.881076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.881278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.881310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.881449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.881479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.881688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.881721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.881920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.881965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 
00:27:53.946 [2024-11-20 12:37:36.882095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.882126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.882263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.882296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.882448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.882480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.882601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.882636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.882933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.882975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 
00:27:53.946 [2024-11-20 12:37:36.883114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.883147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.883343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.883375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.883568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.883600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.883727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.883760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.883966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.884000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 
00:27:53.946 [2024-11-20 12:37:36.884139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.884172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.884299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.884332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.884448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.884481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.884623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.884654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.884781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.884820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 
00:27:53.946 [2024-11-20 12:37:36.885014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.885055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.885197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.885230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.885438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.885470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.885588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.885620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.885742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.885774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 
00:27:53.946 [2024-11-20 12:37:36.885967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.886001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.886119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-20 12:37:36.886151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-20 12:37:36.886271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.886303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-20 12:37:36.886440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.886473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-20 12:37:36.886655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.886687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 
00:27:53.947 [2024-11-20 12:37:36.886805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.886837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-20 12:37:36.886963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.886996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-20 12:37:36.887199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.887231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-20 12:37:36.887370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.887404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-20 12:37:36.887610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.887641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 
00:27:53.947 [2024-11-20 12:37:36.887857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.887892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-20 12:37:36.888112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.888146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-20 12:37:36.888453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.888485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-20 12:37:36.888670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.888703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-20 12:37:36.888851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.888882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 
00:27:53.947 [2024-11-20 12:37:36.889028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.889062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-20 12:37:36.889198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.889230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-20 12:37:36.889357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.889388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-20 12:37:36.889499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.889532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-20 12:37:36.889683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.889716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 
00:27:53.947 [2024-11-20 12:37:36.889843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.889875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-20 12:37:36.890096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.890130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-20 12:37:36.890251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.890284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-20 12:37:36.890425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.890456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-20 12:37:36.890754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.890785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 
00:27:53.947 [2024-11-20 12:37:36.890975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-20 12:37:36.891009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-20 12:37:36.891143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-20 12:37:36.891174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-20 12:37:36.891407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-20 12:37:36.891439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-20 12:37:36.891710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-20 12:37:36.891744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-20 12:37:36.891959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-20 12:37:36.891993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 
00:27:53.948 [2024-11-20 12:37:36.892217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-20 12:37:36.892256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-20 12:37:36.892383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-20 12:37:36.892414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-20 12:37:36.892617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-20 12:37:36.892651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-20 12:37:36.892856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-20 12:37:36.892888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-20 12:37:36.893068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-20 12:37:36.893107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 
00:27:53.948 [2024-11-20 12:37:36.893266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-20 12:37:36.893301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-20 12:37:36.893448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-20 12:37:36.893480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-20 12:37:36.893741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-20 12:37:36.893772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-20 12:37:36.894035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-20 12:37:36.894070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-20 12:37:36.894393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-20 12:37:36.894428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 
00:27:53.948 [2024-11-20 12:37:36.894724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-20 12:37:36.894756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-20 12:37:36.894884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-20 12:37:36.894916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-20 12:37:36.895056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-20 12:37:36.895089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-20 12:37:36.895242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-20 12:37:36.895274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-20 12:37:36.895554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-20 12:37:36.895585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 
00:27:53.948 [2024-11-20 12:37:36.895719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.948 [2024-11-20 12:37:36.895751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:53.948 qpair failed and we were unable to recover it.
[... the three records above repeat for each subsequent reconnect attempt (timestamps 12:37:36.895946 through 12:37:36.922424), every attempt failing identically with errno = 111 (ECONNREFUSED) on tqpair=0x7ff898000b90, addr=10.0.0.2, port=4420 ...]
00:27:53.952 [2024-11-20 12:37:36.922690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.952 [2024-11-20 12:37:36.922722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:53.952 qpair failed and we were unable to recover it.
00:27:53.952 [2024-11-20 12:37:36.922859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.952 [2024-11-20 12:37:36.922891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:53.952 qpair failed and we were unable to recover it.
00:27:53.952 [2024-11-20 12:37:36.923094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.923128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 00:27:53.952 [2024-11-20 12:37:36.923282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.923313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 00:27:53.952 [2024-11-20 12:37:36.923463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.923494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 00:27:53.952 [2024-11-20 12:37:36.923704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.923737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 00:27:53.952 [2024-11-20 12:37:36.923868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.923901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 
00:27:53.952 [2024-11-20 12:37:36.924135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.924167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 00:27:53.952 [2024-11-20 12:37:36.924442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.924475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 00:27:53.952 [2024-11-20 12:37:36.924669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.924700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 00:27:53.952 [2024-11-20 12:37:36.924973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.925007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 00:27:53.952 [2024-11-20 12:37:36.925155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.925186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 
00:27:53.952 [2024-11-20 12:37:36.925461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.925496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 00:27:53.952 [2024-11-20 12:37:36.925620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.925652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 00:27:53.952 [2024-11-20 12:37:36.925847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.925880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 00:27:53.952 [2024-11-20 12:37:36.926186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.926219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 00:27:53.952 [2024-11-20 12:37:36.926382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.926416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 
00:27:53.952 [2024-11-20 12:37:36.926561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.926592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 00:27:53.952 [2024-11-20 12:37:36.926715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.926748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 00:27:53.952 [2024-11-20 12:37:36.926933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.926975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 00:27:53.952 [2024-11-20 12:37:36.927230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.927264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 00:27:53.952 [2024-11-20 12:37:36.927391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.927421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 
00:27:53.952 [2024-11-20 12:37:36.927653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.952 [2024-11-20 12:37:36.927683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.952 qpair failed and we were unable to recover it. 00:27:53.952 [2024-11-20 12:37:36.927904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.927936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.928163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.928194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.928340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.928375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.928655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.928688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 
00:27:53.953 [2024-11-20 12:37:36.928941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.929005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.929266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.929298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.929435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.929469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.929700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.929732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.929971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.930006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 
00:27:53.953 [2024-11-20 12:37:36.930136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.930168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.930318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.930350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.930477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.930509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.930731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.930763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.930961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.931011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 
00:27:53.953 [2024-11-20 12:37:36.931217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.931250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.931435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.931468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.931687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.931719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.931975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.932010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.932264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.932295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 
00:27:53.953 [2024-11-20 12:37:36.932453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.932487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.932641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.932672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.932939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.932981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.933258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.933289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.933493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.933527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 
00:27:53.953 [2024-11-20 12:37:36.933785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.933816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.934018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.934052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.934263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.934295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.934551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.934583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.934903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.934934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 
00:27:53.953 [2024-11-20 12:37:36.935103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.935136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.935313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.935344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.935688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.953 [2024-11-20 12:37:36.935723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.953 qpair failed and we were unable to recover it. 00:27:53.953 [2024-11-20 12:37:36.935944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.935986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.936185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.936220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 
00:27:53.954 [2024-11-20 12:37:36.936450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.936482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.936609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.936641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.936916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.936976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.937182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.937214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.937349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.937381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 
00:27:53.954 [2024-11-20 12:37:36.937591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.937622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.937926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.937972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.938186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.938218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.938420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.938451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.938735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.938768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 
00:27:53.954 [2024-11-20 12:37:36.938981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.939016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.939200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.939232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.939374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.939406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.939700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.939732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.939998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.940030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 
00:27:53.954 [2024-11-20 12:37:36.940174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.940204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.940406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.940439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.940668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.940699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.940893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.940923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.941127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.941164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 
00:27:53.954 [2024-11-20 12:37:36.941385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.941419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.941662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.941693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.941900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.941932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.942078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.942112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.942338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.942370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 
00:27:53.954 [2024-11-20 12:37:36.942494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.942525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.942811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.942844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.942972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.943003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.943204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.943235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 00:27:53.954 [2024-11-20 12:37:36.943444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.954 [2024-11-20 12:37:36.943478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.954 qpair failed and we were unable to recover it. 
00:27:53.954 [2024-11-20 12:37:36.943766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.943798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.943985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.944020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.944137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.944169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.944314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.944345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.944507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.944538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 
00:27:53.955 [2024-11-20 12:37:36.944730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.944761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.944894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.944926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.945151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.945186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.945370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.945401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.945616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.945650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 
00:27:53.955 [2024-11-20 12:37:36.945999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.946031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.946178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.946210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.946342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.946374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.946522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.946554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.946851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.946883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 
00:27:53.955 [2024-11-20 12:37:36.947108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.947140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.947351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.947384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.947695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.947727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.947973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.948007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.948170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.948203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 
00:27:53.955 [2024-11-20 12:37:36.948397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.948430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.948671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.948702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.948886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.948916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.949096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.949129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.949347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.949378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 
00:27:53.955 [2024-11-20 12:37:36.949524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.949556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.949836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.949869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.950099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.950132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.950263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.950295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-20 12:37:36.950427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-20 12:37:36.950467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 
00:27:53.955 [2024-11-20 12:37:36.950744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.950775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.950990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.951023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.951233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.951266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.951416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.951451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.951711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.951745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 
00:27:53.956 [2024-11-20 12:37:36.951886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.951917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.952111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.952142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.952456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.952489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.952787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.952819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.953002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.953035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 
00:27:53.956 [2024-11-20 12:37:36.953188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.953222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.953383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.953416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.953717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.953749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.953962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.953995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.954298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.954331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 
00:27:53.956 [2024-11-20 12:37:36.954534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.954567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.954764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.954796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.954989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.955025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.955171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.955201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.955407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.955438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 
00:27:53.956 [2024-11-20 12:37:36.955665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.955697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.955907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.955939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.956115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.956145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.956342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.956373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.956578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.956611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 
00:27:53.956 [2024-11-20 12:37:36.956789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.956821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.956982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.957015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.957156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.957187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.957388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.957421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.957752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.957785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 
00:27:53.956 [2024-11-20 12:37:36.958007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.958041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.958199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-20 12:37:36.958231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-20 12:37:36.958417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.958449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.958654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.958685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.958925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.958967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 
00:27:53.957 [2024-11-20 12:37:36.959252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.959284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.959565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.959596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.959884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.959916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.960136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.960169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.960299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.960341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 
00:27:53.957 [2024-11-20 12:37:36.960506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.960538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.960720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.960752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.961009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.961043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.961271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.961303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.961506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.961537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 
00:27:53.957 [2024-11-20 12:37:36.961740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.961772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.962022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.962055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.962190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.962222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.962421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.962455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.962748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.962781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 
00:27:53.957 [2024-11-20 12:37:36.963090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.963124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.963382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.963414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.963643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.963675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.963808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.963841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.964086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.964122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 
00:27:53.957 [2024-11-20 12:37:36.964270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.964300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.964531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.964564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.964819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.964851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.965119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.965155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.965405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.965439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 
00:27:53.957 [2024-11-20 12:37:36.965672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.965707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.965918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.965961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.966098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-20 12:37:36.966130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-20 12:37:36.966320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.966353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-20 12:37:36.966561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.966594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 
00:27:53.958 [2024-11-20 12:37:36.966751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.966785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-20 12:37:36.966977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.967010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-20 12:37:36.967207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.967239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-20 12:37:36.967493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.967525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-20 12:37:36.967802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.967835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 
00:27:53.958 [2024-11-20 12:37:36.968099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.968131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-20 12:37:36.968330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.968362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-20 12:37:36.968491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.968524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-20 12:37:36.968754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.968785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-20 12:37:36.968938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.968979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 
00:27:53.958 [2024-11-20 12:37:36.969194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.969228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-20 12:37:36.969484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.969517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-20 12:37:36.969813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.969845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-20 12:37:36.969992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.970027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-20 12:37:36.970181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.970220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 
00:27:53.958 [2024-11-20 12:37:36.970352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.970384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-20 12:37:36.970605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.970638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-20 12:37:36.970839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.970871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-20 12:37:36.971120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.971153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-20 12:37:36.971405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.971437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 
00:27:53.958 [2024-11-20 12:37:36.971714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.971746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-20 12:37:36.971938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.971979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-20 12:37:36.972115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-20 12:37:36.972147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.972356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.972390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.972577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.972611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 
00:27:53.959 [2024-11-20 12:37:36.972866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.972898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.973065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.973098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.973305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.973346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.973512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.973552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.973768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.973810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 
00:27:53.959 [2024-11-20 12:37:36.974103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.974160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.974388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.974464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.974630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.974668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.974856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.974889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.975023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.975058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 
00:27:53.959 [2024-11-20 12:37:36.975321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.975354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.975541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.975575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.975687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.975720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.975828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.975860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.976082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.976118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 
00:27:53.959 [2024-11-20 12:37:36.976374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.976409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.976607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.976654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.976914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.976955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.977144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.977177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.977361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.977393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 
00:27:53.959 [2024-11-20 12:37:36.977695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.977730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.977927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.977974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.978239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.978271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.978476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.978507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.978638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.978671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 
00:27:53.959 [2024-11-20 12:37:36.978878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.978912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.979138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.979174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.979464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.979498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.979718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.979751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.980023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.980058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 
00:27:53.959 [2024-11-20 12:37:36.980270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.980305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.980438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-20 12:37:36.980470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-20 12:37:36.980674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.980707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.980983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.981017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.981201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.981235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 
00:27:53.960 [2024-11-20 12:37:36.981511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.981544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.981751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.981785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.982100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.982134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.982355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.982388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.982608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.982640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 
00:27:53.960 [2024-11-20 12:37:36.982917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.982962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.983167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.983199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.983363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.983398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.983640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.983679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.983916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.983959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 
00:27:53.960 [2024-11-20 12:37:36.984284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.984318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.984523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.984555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.984707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.984740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.985026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.985061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.985364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.985396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 
00:27:53.960 [2024-11-20 12:37:36.985597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.985630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.985776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.985809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.986055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.986093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.986248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.986281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.986510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.986545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 
00:27:53.960 [2024-11-20 12:37:36.986820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.986853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.987089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.987124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.987286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.987319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.987453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.987484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.987611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.987644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 
00:27:53.960 [2024-11-20 12:37:36.987921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.987964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.988085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.988119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.988244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.988276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.988480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.988513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 00:27:53.960 [2024-11-20 12:37:36.988788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.960 [2024-11-20 12:37:36.988819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.960 qpair failed and we were unable to recover it. 
00:27:53.961 [2024-11-20 12:37:36.989098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.961 [2024-11-20 12:37:36.989132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.961 qpair failed and we were unable to recover it. 00:27:53.961 [2024-11-20 12:37:36.989336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.961 [2024-11-20 12:37:36.989368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.961 qpair failed and we were unable to recover it. 00:27:53.961 [2024-11-20 12:37:36.989565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.961 [2024-11-20 12:37:36.989599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.961 qpair failed and we were unable to recover it. 00:27:53.961 [2024-11-20 12:37:36.989828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.961 [2024-11-20 12:37:36.989861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.961 qpair failed and we were unable to recover it. 00:27:53.961 [2024-11-20 12:37:36.990061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.961 [2024-11-20 12:37:36.990095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.961 qpair failed and we were unable to recover it. 
00:27:53.961 [2024-11-20 12:37:36.990310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.990341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.990599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.990632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.990836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.990869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.991073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.991106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.991246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.991279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.991474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.991508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.991814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.991846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.992100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.992137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.992361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.992392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.992670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.992704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.992965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.992999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.993211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.993244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.993442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.993477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.993671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.993705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.993970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.994005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.994226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.994260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.994459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.994512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.994735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.994774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.995084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.995119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.995378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.995425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.995642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.995678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.995965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.996001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.996190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.996225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.996460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.996493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.996711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.961 [2024-11-20 12:37:36.996746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.961 qpair failed and we were unable to recover it.
00:27:53.961 [2024-11-20 12:37:36.996963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.962 [2024-11-20 12:37:36.997004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.962 qpair failed and we were unable to recover it.
00:27:53.962 [2024-11-20 12:37:36.997269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.962 [2024-11-20 12:37:36.997308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.962 qpair failed and we were unable to recover it.
00:27:53.962 [2024-11-20 12:37:36.997474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.962 [2024-11-20 12:37:36.997508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.962 qpair failed and we were unable to recover it.
00:27:53.962 [2024-11-20 12:37:36.997714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.962 [2024-11-20 12:37:36.997747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.962 qpair failed and we were unable to recover it.
00:27:53.962 [2024-11-20 12:37:36.997945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.962 [2024-11-20 12:37:36.998020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.962 qpair failed and we were unable to recover it.
00:27:53.962 [2024-11-20 12:37:36.998248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.962 [2024-11-20 12:37:36.998280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.962 qpair failed and we were unable to recover it.
00:27:53.962 [2024-11-20 12:37:36.998482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.962 [2024-11-20 12:37:36.998517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.962 qpair failed and we were unable to recover it.
00:27:53.962 [2024-11-20 12:37:36.998812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.962 [2024-11-20 12:37:36.998845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.962 qpair failed and we were unable to recover it.
00:27:53.962 [2024-11-20 12:37:36.999119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.962 [2024-11-20 12:37:36.999158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.962 qpair failed and we were unable to recover it.
00:27:53.962 [2024-11-20 12:37:36.999316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.962 [2024-11-20 12:37:36.999363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:53.962 qpair failed and we were unable to recover it.
00:27:53.962 [2024-11-20 12:37:36.999511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.962 [2024-11-20 12:37:36.999553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.962 qpair failed and we were unable to recover it. 00:27:53.962 [2024-11-20 12:37:36.999810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.962 [2024-11-20 12:37:36.999842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:53.962 qpair failed and we were unable to recover it. 00:27:53.962 [2024-11-20 12:37:36.999991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.962 [2024-11-20 12:37:37.000028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-20 12:37:37.000310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-20 12:37:37.000344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-20 12:37:37.000487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-20 12:37:37.000519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 
00:27:54.243 [2024-11-20 12:37:37.000779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.000814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.001040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.001085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.001244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.001277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.001459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.001491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.001706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.001739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.001973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.002010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.002298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.002347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.002546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.002593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.002904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.002993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.003248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.003299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.003534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.003584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.003879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.003930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.004213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.004264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.004445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.004488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.004739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.004788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.005125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.005180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.005363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.005408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.005622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.005671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.006004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.006056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.006206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.006244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.006512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.006546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.006752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.006785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.007086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.007122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.007381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.007414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.007705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.007739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.007871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.007905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.008224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.008260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.008465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.008501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.008646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.008688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.008913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.008961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.009168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.009201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.009343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.009378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.009560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.009593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.009809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.243 [2024-11-20 12:37:37.009843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.243 qpair failed and we were unable to recover it.
00:27:54.243 [2024-11-20 12:37:37.010044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.010082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.010217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.010251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.010444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.010477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.010606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.010638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.010908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.010943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.011138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.011171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.011393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.011428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.011566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.011598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.011794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.011829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.011996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.012033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.012196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.012230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.012427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.012460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.012604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.012639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.012754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.012789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.012928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.012972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.013168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.013202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.013480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.013513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.013772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.013806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.014004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.014040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.014164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.014198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.014403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-20 12:37:37.014437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
00:27:54.244 [2024-11-20 12:37:37.014632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-20 12:37:37.014666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-20 12:37:37.014871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-20 12:37:37.014903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-20 12:37:37.015143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-20 12:37:37.015177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-20 12:37:37.015292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-20 12:37:37.015326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-20 12:37:37.015523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-20 12:37:37.015555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 
00:27:54.244 [2024-11-20 12:37:37.015757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-20 12:37:37.015789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-20 12:37:37.016013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-20 12:37:37.016049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-20 12:37:37.016186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-20 12:37:37.016219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-20 12:37:37.016361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-20 12:37:37.016394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-20 12:37:37.016582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-20 12:37:37.016614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 
00:27:54.244 [2024-11-20 12:37:37.016754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-20 12:37:37.016788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-20 12:37:37.016924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-20 12:37:37.016969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-20 12:37:37.017308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-20 12:37:37.017341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-20 12:37:37.017478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-20 12:37:37.017512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-20 12:37:37.017719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-20 12:37:37.017753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 
00:27:54.244 [2024-11-20 12:37:37.017893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-20 12:37:37.017928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-20 12:37:37.018132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-20 12:37:37.018166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.018436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.018475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.018619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.018654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.018932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.018978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 
00:27:54.245 [2024-11-20 12:37:37.019116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.019149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.019277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.019309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.019498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.019531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.019733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.019766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.019882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.019916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 
00:27:54.245 [2024-11-20 12:37:37.020064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.020101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.020294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.020326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.020528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.020562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.020802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.020835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.020971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.021007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 
00:27:54.245 [2024-11-20 12:37:37.021213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.021245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.021372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.021405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.021605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.021637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.021849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.021882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.022014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.022049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 
00:27:54.245 [2024-11-20 12:37:37.022187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.022219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.022361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.022395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.022620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.022656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.022899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.022931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.023065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.023099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 
00:27:54.245 [2024-11-20 12:37:37.023284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.023319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.023442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.023487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.023794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.023829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.024021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.024056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.024193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.024226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 
00:27:54.245 [2024-11-20 12:37:37.024360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.024392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.024577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.024609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.024918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.024961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.025238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.025271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.025468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.025500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 
00:27:54.245 [2024-11-20 12:37:37.025699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.025732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.025842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.025875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.026150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.026185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.026369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.026402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.245 qpair failed and we were unable to recover it. 00:27:54.245 [2024-11-20 12:37:37.026619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.245 [2024-11-20 12:37:37.026652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 
00:27:54.246 [2024-11-20 12:37:37.026884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.026916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.027140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.027174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.027383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.027416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.027694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.027727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.027865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.027898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 
00:27:54.246 [2024-11-20 12:37:37.028123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.028158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.028283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.028317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.028524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.028556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.028767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.028799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.029079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.029114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 
00:27:54.246 [2024-11-20 12:37:37.029313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.029346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.029490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.029523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.029824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.029856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.030133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.030174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.030413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.030447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 
00:27:54.246 [2024-11-20 12:37:37.030661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.030694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.030974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.031009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.031265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.031298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.031508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.031540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.031672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.031705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 
00:27:54.246 [2024-11-20 12:37:37.031894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.031928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.032143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.032176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.032421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.032455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.032782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.032816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.033021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.033056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 
00:27:54.246 [2024-11-20 12:37:37.033262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.033296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.033476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.033509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.033817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.033852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.034095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.034130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.034341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.034374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 
00:27:54.246 [2024-11-20 12:37:37.034563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.034597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.034758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.034791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.035007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.035043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.035294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.035327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 00:27:54.246 [2024-11-20 12:37:37.035664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.246 [2024-11-20 12:37:37.035696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.246 qpair failed and we were unable to recover it. 
00:27:54.250 [2024-11-20 12:37:37.064333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.064366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.064504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.064536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.064812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.064844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.065121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.065155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.065315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.065348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 
00:27:54.250 [2024-11-20 12:37:37.065605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.065638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.065902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.065934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.066224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.066258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.066502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.066535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.066726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.066758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 
00:27:54.250 [2024-11-20 12:37:37.066989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.067024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.067172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.067204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.067330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.067362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.067553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.067586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.067874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.067907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 
00:27:54.250 [2024-11-20 12:37:37.068098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.068132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.068286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.068319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.068600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.068633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.068894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.068927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.069132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.069164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 
00:27:54.250 [2024-11-20 12:37:37.069438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.069470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.069758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.069790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.070081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.070116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.070321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.070352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.070570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.070602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 
00:27:54.250 [2024-11-20 12:37:37.070800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.070832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.071086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.071120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.071319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.071351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.071634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.071666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.071942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.071984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 
00:27:54.250 [2024-11-20 12:37:37.072135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.072167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.072368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.072401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.072693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.072725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.073003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.073037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.073252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.073284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 
00:27:54.250 [2024-11-20 12:37:37.073488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.073520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-20 12:37:37.073738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-20 12:37:37.073771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.073974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.074007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.074222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.074254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.074457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.074489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 
00:27:54.251 [2024-11-20 12:37:37.074761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.074793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.075044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.075078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.075231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.075263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.075484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.075516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.075785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.075817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 
00:27:54.251 [2024-11-20 12:37:37.076020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.076054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.076302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.076335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.076537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.076569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.076789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.076822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.077024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.077057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 
00:27:54.251 [2024-11-20 12:37:37.077183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.077215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.077349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.077381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.077590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.077622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.077811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.077842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.078107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.078140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 
00:27:54.251 [2024-11-20 12:37:37.078395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.078427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.078576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.078608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.078744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.078774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.079054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.079101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.079318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.079351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 
00:27:54.251 [2024-11-20 12:37:37.079533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.079565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.079709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.079741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.079959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.079998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.080276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.080312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.080498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.080531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 
00:27:54.251 [2024-11-20 12:37:37.080814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.080860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.081055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.081090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.081238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.081272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.081499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.081531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.081807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.081840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 
00:27:54.251 [2024-11-20 12:37:37.082147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.082180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.082387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.082420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.082738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.082771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.083025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.083059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-20 12:37:37.083213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-20 12:37:37.083245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 
00:27:54.252 [2024-11-20 12:37:37.083449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.083483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.083632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.083664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.083793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.083824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.084074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.084107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.084244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.084278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 
00:27:54.252 [2024-11-20 12:37:37.084408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.084440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.084695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.084728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.085005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.085039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.085222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.085255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.085488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.085522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 
00:27:54.252 [2024-11-20 12:37:37.085740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.085779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.086069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.086105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.086291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.086324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.086489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.086522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.086744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.086776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 
00:27:54.252 [2024-11-20 12:37:37.087040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.087075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.087279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.087312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.087610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.087643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.087924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.087987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.088192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.088225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 
00:27:54.252 [2024-11-20 12:37:37.088420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.088453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.088697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.088731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.088985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.089020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.089225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.089258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.089423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.089457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 
00:27:54.252 [2024-11-20 12:37:37.089738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.089771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.090003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.090037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.090340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.090373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.090498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.090530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.090676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.090709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 
00:27:54.252 [2024-11-20 12:37:37.090982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.091017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.091241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.091274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.091457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.091490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-20 12:37:37.091691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-20 12:37:37.091724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.092043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.092078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 
00:27:54.253 [2024-11-20 12:37:37.092237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.092273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.092418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.092451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.092682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.092721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.092915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.092958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.093216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.093249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 
00:27:54.253 [2024-11-20 12:37:37.093444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.093476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.093759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.093792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.094010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.094045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.094263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.094296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.094496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.094529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 
00:27:54.253 [2024-11-20 12:37:37.094803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.094837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.094974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.095009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.095241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.095274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.095528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.095562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.095873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.095906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 
00:27:54.253 [2024-11-20 12:37:37.096159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.096193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.096396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.096430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.096616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.096649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.096829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.096862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.097077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.097113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 
00:27:54.253 [2024-11-20 12:37:37.097305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.097339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.097542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.097575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.097845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.097877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.098196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.098230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.098488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.098522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 
00:27:54.253 [2024-11-20 12:37:37.098866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.098899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.099162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.099197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.099425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.099459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.099779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.099811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.100033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.100068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 
00:27:54.253 [2024-11-20 12:37:37.100278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.100311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.100501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.100533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.100731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.100763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.101021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.101056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-20 12:37:37.101213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-20 12:37:37.101247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 
00:27:54.254 [2024-11-20 12:37:37.101448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.101480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.101685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.101718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.101918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.101960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.102148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.102181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.102408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.102441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 
00:27:54.254 [2024-11-20 12:37:37.102662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.102694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.102875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.102908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.103072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.103105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.103330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.103364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.103574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.103607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 
00:27:54.254 [2024-11-20 12:37:37.103798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.103830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.104089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.104124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.104325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.104359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.104512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.104545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.104830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.104863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 
00:27:54.254 [2024-11-20 12:37:37.105043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.105078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.105283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.105317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.105541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.105574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.105775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.105808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.106070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.106104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 
00:27:54.254 [2024-11-20 12:37:37.106246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.106279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.106476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.106509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.106648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.106681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.106978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.107013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.107292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.107326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 
00:27:54.254 [2024-11-20 12:37:37.107477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.107510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.107764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.107797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.107997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.108032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.108227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.108260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-20 12:37:37.108480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-20 12:37:37.108512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 
00:27:54.257 [2024-11-20 12:37:37.126158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.257 [2024-11-20 12:37:37.126190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.257 qpair failed and we were unable to recover it.
00:27:54.257 [2024-11-20 12:37:37.126325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.257 [2024-11-20 12:37:37.126357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.257 qpair failed and we were unable to recover it.
00:27:54.257 [2024-11-20 12:37:37.126514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.257 [2024-11-20 12:37:37.126547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.257 qpair failed and we were unable to recover it.
00:27:54.257 [2024-11-20 12:37:37.126751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.257 [2024-11-20 12:37:37.126829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:54.257 qpair failed and we were unable to recover it.
00:27:54.257 [2024-11-20 12:37:37.127072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.257 [2024-11-20 12:37:37.127112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:54.257 qpair failed and we were unable to recover it.
00:27:54.258 [2024-11-20 12:37:37.135677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.258 [2024-11-20 12:37:37.135709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.258 qpair failed and we were unable to recover it. 00:27:54.258 [2024-11-20 12:37:37.135968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.258 [2024-11-20 12:37:37.136001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.258 qpair failed and we were unable to recover it. 00:27:54.258 [2024-11-20 12:37:37.136223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.258 [2024-11-20 12:37:37.136256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.258 qpair failed and we were unable to recover it. 00:27:54.258 [2024-11-20 12:37:37.136472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.258 [2024-11-20 12:37:37.136505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.258 qpair failed and we were unable to recover it. 00:27:54.258 [2024-11-20 12:37:37.136797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.258 [2024-11-20 12:37:37.136828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.258 qpair failed and we were unable to recover it. 
00:27:54.258 [2024-11-20 12:37:37.137054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.258 [2024-11-20 12:37:37.137088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.258 qpair failed and we were unable to recover it. 00:27:54.258 [2024-11-20 12:37:37.137222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.258 [2024-11-20 12:37:37.137254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.258 qpair failed and we were unable to recover it. 00:27:54.258 [2024-11-20 12:37:37.137518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.258 [2024-11-20 12:37:37.137549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.258 qpair failed and we were unable to recover it. 00:27:54.258 [2024-11-20 12:37:37.137699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.258 [2024-11-20 12:37:37.137731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.258 qpair failed and we were unable to recover it. 00:27:54.258 [2024-11-20 12:37:37.137925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.258 [2024-11-20 12:37:37.137967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.258 qpair failed and we were unable to recover it. 
00:27:54.258 [2024-11-20 12:37:37.138115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.258 [2024-11-20 12:37:37.138147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.258 qpair failed and we were unable to recover it. 00:27:54.258 [2024-11-20 12:37:37.138352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.258 [2024-11-20 12:37:37.138383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.258 qpair failed and we were unable to recover it. 00:27:54.258 [2024-11-20 12:37:37.138592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.258 [2024-11-20 12:37:37.138625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.258 qpair failed and we were unable to recover it. 00:27:54.258 [2024-11-20 12:37:37.138823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.258 [2024-11-20 12:37:37.138855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.258 qpair failed and we were unable to recover it. 00:27:54.258 [2024-11-20 12:37:37.139121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.258 [2024-11-20 12:37:37.139155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.258 qpair failed and we were unable to recover it. 
00:27:54.258 [2024-11-20 12:37:37.139359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.258 [2024-11-20 12:37:37.139390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.258 qpair failed and we were unable to recover it. 00:27:54.258 [2024-11-20 12:37:37.139599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.258 [2024-11-20 12:37:37.139637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.258 qpair failed and we were unable to recover it. 00:27:54.258 [2024-11-20 12:37:37.139840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.258 [2024-11-20 12:37:37.139870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.258 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.140001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.140034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.140195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.140227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 
00:27:54.259 [2024-11-20 12:37:37.140375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.140406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.140608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.140640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.140787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.140818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.141081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.141114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.141306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.141338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 
00:27:54.259 [2024-11-20 12:37:37.141491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.141523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.141816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.141847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.142056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.142092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.142247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.142280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.142586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.142617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 
00:27:54.259 [2024-11-20 12:37:37.142840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.142873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.143141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.143176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.143325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.143357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.143563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.143595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.143862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.143894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 
00:27:54.259 [2024-11-20 12:37:37.144118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.144151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.144351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.144383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.144657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.144689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.144847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.144879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.145025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.145059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 
00:27:54.259 [2024-11-20 12:37:37.145257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.145289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.145471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.145504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.145770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.145802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.146007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.146041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.146270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.146302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 
00:27:54.259 [2024-11-20 12:37:37.146449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.146481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.146776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.146808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.147016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.147049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-20 12:37:37.147305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-20 12:37:37.147338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.147618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.147651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 
00:27:54.260 [2024-11-20 12:37:37.147862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.147894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.148176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.148210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.148418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.148450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.148686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.148718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.148997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.149030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 
00:27:54.260 [2024-11-20 12:37:37.149308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.149341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.149503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.149542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.149687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.149720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.149901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.149933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.150202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.150235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 
00:27:54.260 [2024-11-20 12:37:37.150388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.150419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.150671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.150703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.150897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.150929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.151146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.151179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.151370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.151402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 
00:27:54.260 [2024-11-20 12:37:37.151544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.151576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.151828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.151861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.152071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.152104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.152296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.152328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.152562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.152595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 
00:27:54.260 [2024-11-20 12:37:37.152880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.152912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.153199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.153233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.153459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.153492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.153683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.153715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.153915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.153973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 
00:27:54.260 [2024-11-20 12:37:37.154115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.154147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.154297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.154330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.154463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.154494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.154807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.154839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-20 12:37:37.155035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-20 12:37:37.155070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 
00:27:54.264 [2024-11-20 12:37:37.184319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.184352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 00:27:54.264 [2024-11-20 12:37:37.184504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.184536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 00:27:54.264 [2024-11-20 12:37:37.184795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.184827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 00:27:54.264 [2024-11-20 12:37:37.185028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.185062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 00:27:54.264 [2024-11-20 12:37:37.185263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.185296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 
00:27:54.264 [2024-11-20 12:37:37.185550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.185583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 00:27:54.264 [2024-11-20 12:37:37.185713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.185745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 00:27:54.264 [2024-11-20 12:37:37.186028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.186061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 00:27:54.264 [2024-11-20 12:37:37.186257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.186290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 00:27:54.264 [2024-11-20 12:37:37.186568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.186600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 
00:27:54.264 [2024-11-20 12:37:37.186795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.186827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 00:27:54.264 [2024-11-20 12:37:37.187095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.187128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 00:27:54.264 [2024-11-20 12:37:37.187421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.187454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 00:27:54.264 [2024-11-20 12:37:37.187678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.187723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 00:27:54.264 [2024-11-20 12:37:37.187940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.187985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 
00:27:54.264 [2024-11-20 12:37:37.188187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.188220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 00:27:54.264 [2024-11-20 12:37:37.188472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.188505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 00:27:54.264 [2024-11-20 12:37:37.188765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.188797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 00:27:54.264 [2024-11-20 12:37:37.189072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.189106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 00:27:54.264 [2024-11-20 12:37:37.189363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.189395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 
00:27:54.264 [2024-11-20 12:37:37.189578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.189610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 00:27:54.264 [2024-11-20 12:37:37.189892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.189924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 00:27:54.264 [2024-11-20 12:37:37.190176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.264 [2024-11-20 12:37:37.190214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.264 qpair failed and we were unable to recover it. 00:27:54.264 [2024-11-20 12:37:37.190493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.190526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.190738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.190771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 
00:27:54.265 [2024-11-20 12:37:37.191039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.191073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.191270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.191302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.191513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.191546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.191761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.191794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.192076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.192110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 
00:27:54.265 [2024-11-20 12:37:37.192244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.192276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.192473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.192505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.192721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.192753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.193040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.193075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.193352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.193385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 
00:27:54.265 [2024-11-20 12:37:37.193585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.193617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.193877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.193908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.194138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.194171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.194355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.194396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.194671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.194703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 
00:27:54.265 [2024-11-20 12:37:37.194920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.194971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.195178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.195210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.195490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.195522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.195717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.195748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.196008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.196042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 
00:27:54.265 [2024-11-20 12:37:37.196339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.196371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.196605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.196637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.196916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.196955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.197153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.197185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.197363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.197395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 
00:27:54.265 [2024-11-20 12:37:37.197658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.197689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.197969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.198003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.198185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.198218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.198363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.198401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.198592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.198624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 
00:27:54.265 [2024-11-20 12:37:37.198924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.198991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.199243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.199275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.199555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.199587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.199835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.199866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.265 [2024-11-20 12:37:37.200142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.200176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 
00:27:54.265 [2024-11-20 12:37:37.200360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.265 [2024-11-20 12:37:37.200393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.265 qpair failed and we were unable to recover it. 00:27:54.266 [2024-11-20 12:37:37.200591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.266 [2024-11-20 12:37:37.200624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.266 qpair failed and we were unable to recover it. 00:27:54.266 [2024-11-20 12:37:37.200820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.266 [2024-11-20 12:37:37.200852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.266 qpair failed and we were unable to recover it. 00:27:54.266 [2024-11-20 12:37:37.201063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.266 [2024-11-20 12:37:37.201100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.266 qpair failed and we were unable to recover it. 00:27:54.266 [2024-11-20 12:37:37.201408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.266 [2024-11-20 12:37:37.201442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.266 qpair failed and we were unable to recover it. 
00:27:54.266 [2024-11-20 12:37:37.201622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.266 [2024-11-20 12:37:37.201655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.266 qpair failed and we were unable to recover it. 00:27:54.266 [2024-11-20 12:37:37.201908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.266 [2024-11-20 12:37:37.201941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.266 qpair failed and we were unable to recover it. 00:27:54.266 [2024-11-20 12:37:37.202152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.266 [2024-11-20 12:37:37.202184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.266 qpair failed and we were unable to recover it. 00:27:54.266 [2024-11-20 12:37:37.202456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.266 [2024-11-20 12:37:37.202488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.266 qpair failed and we were unable to recover it. 00:27:54.266 [2024-11-20 12:37:37.202720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.266 [2024-11-20 12:37:37.202753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.266 qpair failed and we were unable to recover it. 
00:27:54.266 [2024-11-20 12:37:37.203009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.266 [2024-11-20 12:37:37.203043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.266 qpair failed and we were unable to recover it. 00:27:54.266 [2024-11-20 12:37:37.203342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.266 [2024-11-20 12:37:37.203375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.266 qpair failed and we were unable to recover it. 00:27:54.266 [2024-11-20 12:37:37.203692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.266 [2024-11-20 12:37:37.203725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.266 qpair failed and we were unable to recover it. 00:27:54.266 [2024-11-20 12:37:37.204018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.266 [2024-11-20 12:37:37.204052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.266 qpair failed and we were unable to recover it. 00:27:54.266 [2024-11-20 12:37:37.204271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.266 [2024-11-20 12:37:37.204305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.266 qpair failed and we were unable to recover it. 
00:27:54.266 [2024-11-20 12:37:37.204507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.266 [2024-11-20 12:37:37.204539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.266 qpair failed and we were unable to recover it. 00:27:54.266 [2024-11-20 12:37:37.204754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.266 [2024-11-20 12:37:37.204787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.266 qpair failed and we were unable to recover it. 00:27:54.266 [2024-11-20 12:37:37.205041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.266 [2024-11-20 12:37:37.205074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.266 qpair failed and we were unable to recover it. 00:27:54.266 [2024-11-20 12:37:37.205334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.266 [2024-11-20 12:37:37.205367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.266 qpair failed and we were unable to recover it. 00:27:54.266 [2024-11-20 12:37:37.205568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.266 [2024-11-20 12:37:37.205601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.266 qpair failed and we were unable to recover it. 
00:27:54.266 [... the same retry loop repeated from 12:37:37.205871 through 12:37:37.232217, with identical output on every attempt: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. ...]
00:27:54.269 [2024-11-20 12:37:37.232466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-20 12:37:37.232497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-20 12:37:37.232766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-20 12:37:37.232797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-20 12:37:37.233098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-20 12:37:37.233132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-20 12:37:37.233378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-20 12:37:37.233410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-20 12:37:37.233648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-20 12:37:37.233679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 
00:27:54.269 [2024-11-20 12:37:37.233934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-20 12:37:37.233978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-20 12:37:37.234133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-20 12:37:37.234166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-20 12:37:37.234428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-20 12:37:37.234459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-20 12:37:37.234760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-20 12:37:37.234792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-20 12:37:37.235085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-20 12:37:37.235119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 
00:27:54.269 [2024-11-20 12:37:37.235425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-20 12:37:37.235458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-20 12:37:37.235714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-20 12:37:37.235747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-20 12:37:37.235888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-20 12:37:37.235921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-20 12:37:37.236205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-20 12:37:37.236238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-20 12:37:37.236453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-20 12:37:37.236485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 
00:27:54.270 [2024-11-20 12:37:37.236717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.236748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.236945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.236986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.237241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.237273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.237468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.237500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.237698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.237731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 
00:27:54.270 [2024-11-20 12:37:37.237959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.237994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.238270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.238302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.238551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.238583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.238759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.238803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.238993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.239028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 
00:27:54.270 [2024-11-20 12:37:37.239297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.239330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.239660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.239692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.239973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.240006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.240211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.240242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.240380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.240412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 
00:27:54.270 [2024-11-20 12:37:37.240536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.240568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.240867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.240899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.241031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.241064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.241212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.241244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.241466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.241498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 
00:27:54.270 [2024-11-20 12:37:37.241699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.241732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.242006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.242040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.242302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.242335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.242606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.242637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.242889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.242920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 
00:27:54.270 [2024-11-20 12:37:37.243193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.243225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.243481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.243514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.243772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.243804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.244106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.244140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.244289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.244321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 
00:27:54.270 [2024-11-20 12:37:37.244542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.244574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.244825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.244856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.245122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.245155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.245413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.245445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-20 12:37:37.245624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.245655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 
00:27:54.270 [2024-11-20 12:37:37.245866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-20 12:37:37.245899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.246164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.246196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.246446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.246478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.246783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.246814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.247098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.247131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 
00:27:54.271 [2024-11-20 12:37:37.247385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.247418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.247673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.247705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.247980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.248015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.248269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.248301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.248579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.248610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 
00:27:54.271 [2024-11-20 12:37:37.248892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.248924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.249140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.249173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.249390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.249422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.249688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.249725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.250026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.250060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 
00:27:54.271 [2024-11-20 12:37:37.250327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.250359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.250653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.250684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.250898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.250930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.251175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.251208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.251472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.251505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 
00:27:54.271 [2024-11-20 12:37:37.251725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.251756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.251960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.251993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.252202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.252234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.252511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.252542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-20 12:37:37.252734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.252766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 
00:27:54.271 [2024-11-20 12:37:37.253034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-20 12:37:37.253068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 
[... the same connect() / sock connection error / "qpair failed and we were unable to recover it" triple repeats ~94 more times for tqpair=0x7ff8a0000b90 (addr=10.0.0.2, port=4420, errno = 111) between 12:37:37.253272 and 12:37:37.278420; only the timestamps differ ...] 
00:27:54.274 [2024-11-20 12:37:37.278759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.274 [2024-11-20 12:37:37.278835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.274 qpair failed and we were unable to recover it. 
[... the same error triple repeats ~19 more times for tqpair=0x7ff898000b90 (addr=10.0.0.2, port=4420, errno = 111) between 12:37:37.279076 and 12:37:37.283549; only the timestamps differ ...] 
00:27:54.275 [2024-11-20 12:37:37.283800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.283833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.284065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.284099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.284249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.284282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.284517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.284549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.284771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.284803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 
00:27:54.275 [2024-11-20 12:37:37.285086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.285122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.285254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.285286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.285542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.285576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.285780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.285812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.286090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.286124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 
00:27:54.275 [2024-11-20 12:37:37.286405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.286439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.286641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.286673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.286897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.286931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.287251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.287285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.287537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.287569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 
00:27:54.275 [2024-11-20 12:37:37.287772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.287805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.288081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.288115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.288400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.288434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.288731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.288765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.289015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.289049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 
00:27:54.275 [2024-11-20 12:37:37.289274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.289307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.289581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.289614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.289842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.289874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.290078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.290110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.290307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.290340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 
00:27:54.275 [2024-11-20 12:37:37.290619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.290651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.290904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.290936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.291248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.291279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.291576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.291609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.291880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.291913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 
00:27:54.275 [2024-11-20 12:37:37.292061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.292094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.292388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.292420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.292626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.292660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.292881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.292912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.275 qpair failed and we were unable to recover it. 00:27:54.275 [2024-11-20 12:37:37.293253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.275 [2024-11-20 12:37:37.293332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 
00:27:54.276 [2024-11-20 12:37:37.293558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.293594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.293880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.293915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.294281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.294322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.294540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.294574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.294726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.294757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 
00:27:54.276 [2024-11-20 12:37:37.294970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.295004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.295218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.295252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.295547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.295579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.295837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.295871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.296134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.296169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 
00:27:54.276 [2024-11-20 12:37:37.296423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.296456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.296757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.296790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.297005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.297039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.297249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.297283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.297426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.297459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 
00:27:54.276 [2024-11-20 12:37:37.297721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.297752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.298027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.298063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.298287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.298320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.298617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.298649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.298872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.298906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 
00:27:54.276 [2024-11-20 12:37:37.299176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.299210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.299498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.299530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.299817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.299848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.300130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.300163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.300423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.300457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 
00:27:54.276 [2024-11-20 12:37:37.300755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.300795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.301060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.301096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.301324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.301357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.301552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.301584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.301786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.301818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 
00:27:54.276 [2024-11-20 12:37:37.302093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.302126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.302429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.302461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.302674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.302707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.302985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.303019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.303305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.303337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 
00:27:54.276 [2024-11-20 12:37:37.303482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.276 [2024-11-20 12:37:37.303516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.276 qpair failed and we were unable to recover it. 00:27:54.276 [2024-11-20 12:37:37.303815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.277 [2024-11-20 12:37:37.303848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.277 qpair failed and we were unable to recover it. 00:27:54.277 [2024-11-20 12:37:37.304036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.277 [2024-11-20 12:37:37.304072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.277 qpair failed and we were unable to recover it. 00:27:54.277 [2024-11-20 12:37:37.304273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.277 [2024-11-20 12:37:37.304306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.277 qpair failed and we were unable to recover it. 00:27:54.277 [2024-11-20 12:37:37.304509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.277 [2024-11-20 12:37:37.304540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.277 qpair failed and we were unable to recover it. 
00:27:54.277 [2024-11-20 12:37:37.304869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.277 [2024-11-20 12:37:37.304901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.277 qpair failed and we were unable to recover it.
[... the connect()/qpair-failed triplet above repeats on every retry, errno = 111 each time, through 12:37:37.335996; identical except for timestamps ...]
00:27:54.280 [2024-11-20 12:37:37.335961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.280 [2024-11-20 12:37:37.335996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.280 qpair failed and we were unable to recover it.
00:27:54.280 [2024-11-20 12:37:37.336271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.280 [2024-11-20 12:37:37.336303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.280 qpair failed and we were unable to recover it. 00:27:54.280 [2024-11-20 12:37:37.336497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.280 [2024-11-20 12:37:37.336530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.280 qpair failed and we were unable to recover it. 00:27:54.280 [2024-11-20 12:37:37.336657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.280 [2024-11-20 12:37:37.336689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.280 qpair failed and we were unable to recover it. 00:27:54.280 [2024-11-20 12:37:37.336918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.280 [2024-11-20 12:37:37.336959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.280 qpair failed and we were unable to recover it. 00:27:54.280 [2024-11-20 12:37:37.337269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.280 [2024-11-20 12:37:37.337302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.280 qpair failed and we were unable to recover it. 
00:27:54.559 [2024-11-20 12:37:37.337513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.337546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.337797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.337830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.338134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.338168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.338454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.338486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.338735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.338767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 
00:27:54.559 [2024-11-20 12:37:37.338906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.338937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.339084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.339116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.339301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.339333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.339601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.339632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.339856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.339888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 
00:27:54.559 [2024-11-20 12:37:37.340105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.340138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.340393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.340426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.340689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.340722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.340939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.340981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.341233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.341264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 
00:27:54.559 [2024-11-20 12:37:37.341521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.341553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.341773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.341804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.342061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.342096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.342353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.342391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.342668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.342701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 
00:27:54.559 [2024-11-20 12:37:37.342985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.343018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.343214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.343248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.343539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.343570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.343774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.343806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.343995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.344029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 
00:27:54.559 [2024-11-20 12:37:37.344313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.344354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-20 12:37:37.344550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-20 12:37:37.344581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.344771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.344805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.345061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.345095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.345300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.345333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 
00:27:54.560 [2024-11-20 12:37:37.345483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.345517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.345699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.345734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.345966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.346002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.346257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.346291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.346593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.346626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 
00:27:54.560 [2024-11-20 12:37:37.346890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.346923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.347228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.347263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.347520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.347554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.347775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.347807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.348112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.348147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 
00:27:54.560 [2024-11-20 12:37:37.348300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.348332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.348611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.348643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.348833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.348865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.349130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.349168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.349370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.349403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 
00:27:54.560 [2024-11-20 12:37:37.349675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.349707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.349991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.350027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.350313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.350345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.350622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.350656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.350913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.350958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 
00:27:54.560 [2024-11-20 12:37:37.351247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.351281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.351534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.351567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.351765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.351797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.352098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.352132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.352273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.352307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 
00:27:54.560 [2024-11-20 12:37:37.352562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.352594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.352873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.352907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.353148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.353184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.353463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.353495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.353796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.353829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 
00:27:54.560 [2024-11-20 12:37:37.354094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.354128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.354340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.354374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.354656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.354687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.354837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.354871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-20 12:37:37.355147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-20 12:37:37.355182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 
00:27:54.561 [2024-11-20 12:37:37.355436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-20 12:37:37.355474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-20 12:37:37.355795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-20 12:37:37.355827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-20 12:37:37.356041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-20 12:37:37.356077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-20 12:37:37.356353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-20 12:37:37.356386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-20 12:37:37.356676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-20 12:37:37.356709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 
00:27:54.561 [2024-11-20 12:37:37.356984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-20 12:37:37.357019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-20 12:37:37.357329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-20 12:37:37.357363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-20 12:37:37.357543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-20 12:37:37.357577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-20 12:37:37.357832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-20 12:37:37.357865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-20 12:37:37.358165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-20 12:37:37.358198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 
00:27:54.564 [2024-11-20 12:37:37.387294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.387334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.564 [2024-11-20 12:37:37.387596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.387629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.564 [2024-11-20 12:37:37.387834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.387864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.564 [2024-11-20 12:37:37.388116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.388152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.564 [2024-11-20 12:37:37.388378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.388410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 
00:27:54.564 [2024-11-20 12:37:37.388537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.388568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.564 [2024-11-20 12:37:37.388790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.388820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.564 [2024-11-20 12:37:37.388927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.388967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.564 [2024-11-20 12:37:37.389264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.389296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.564 [2024-11-20 12:37:37.389548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.389580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 
00:27:54.564 [2024-11-20 12:37:37.389806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.389839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.564 [2024-11-20 12:37:37.390111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.390145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.564 [2024-11-20 12:37:37.390353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.390385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.564 [2024-11-20 12:37:37.390663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.390695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.564 [2024-11-20 12:37:37.390904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.390935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 
00:27:54.564 [2024-11-20 12:37:37.391197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.391229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.564 [2024-11-20 12:37:37.391374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.391405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.564 [2024-11-20 12:37:37.391660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.391690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.564 [2024-11-20 12:37:37.391911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.391944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.564 [2024-11-20 12:37:37.392156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.392190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 
00:27:54.564 [2024-11-20 12:37:37.392460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.392491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.564 [2024-11-20 12:37:37.392679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.392710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.564 [2024-11-20 12:37:37.393005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.393038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.564 [2024-11-20 12:37:37.393339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.564 [2024-11-20 12:37:37.393373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.564 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.393650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.393680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 
00:27:54.565 [2024-11-20 12:37:37.393864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.393897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.394185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.394220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.394360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.394392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.394527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.394559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.394830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.394863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 
00:27:54.565 [2024-11-20 12:37:37.395142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.395176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.395462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.395494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.395729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.395760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.395961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.395996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.396205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.396237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 
00:27:54.565 [2024-11-20 12:37:37.396511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.396542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.396794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.396826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.397091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.397124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.397425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.397456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.397685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.397718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 
00:27:54.565 [2024-11-20 12:37:37.397924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.397974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.398308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.398342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.398622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.398655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.398938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.398982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.399252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.399285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 
00:27:54.565 [2024-11-20 12:37:37.399608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.399641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.399918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.399959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.400240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.400274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.400479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.400510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.400713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.400747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 
00:27:54.565 [2024-11-20 12:37:37.401001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.401034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.401293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.401325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.401629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.401661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.401938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.401981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.402188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.402226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 
00:27:54.565 [2024-11-20 12:37:37.402348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.402378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.402623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.402658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.402844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.402877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.403098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.403132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.403323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.403356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 
00:27:54.565 [2024-11-20 12:37:37.403561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.565 [2024-11-20 12:37:37.403592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.565 qpair failed and we were unable to recover it. 00:27:54.565 [2024-11-20 12:37:37.403867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.566 [2024-11-20 12:37:37.403898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.566 qpair failed and we were unable to recover it. 00:27:54.566 [2024-11-20 12:37:37.404164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.566 [2024-11-20 12:37:37.404198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.566 qpair failed and we were unable to recover it. 00:27:54.566 [2024-11-20 12:37:37.404518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.566 [2024-11-20 12:37:37.404551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.566 qpair failed and we were unable to recover it. 00:27:54.566 [2024-11-20 12:37:37.404787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.566 [2024-11-20 12:37:37.404820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.566 qpair failed and we were unable to recover it. 
00:27:54.566 [2024-11-20 12:37:37.405075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.566 [2024-11-20 12:37:37.405110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.566 qpair failed and we were unable to recover it. 00:27:54.566 [2024-11-20 12:37:37.405413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.566 [2024-11-20 12:37:37.405444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.566 qpair failed and we were unable to recover it. 00:27:54.566 [2024-11-20 12:37:37.405754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.566 [2024-11-20 12:37:37.405788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.566 qpair failed and we were unable to recover it. 00:27:54.566 [2024-11-20 12:37:37.405991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.566 [2024-11-20 12:37:37.406025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.566 qpair failed and we were unable to recover it. 00:27:54.566 [2024-11-20 12:37:37.406310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.566 [2024-11-20 12:37:37.406343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.566 qpair failed and we were unable to recover it. 
00:27:54.566 [2024-11-20 12:37:37.406561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.566 [2024-11-20 12:37:37.406595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.566 qpair failed and we were unable to recover it. 00:27:54.566 [2024-11-20 12:37:37.406878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.566 [2024-11-20 12:37:37.406910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.566 qpair failed and we were unable to recover it. 00:27:54.566 [2024-11-20 12:37:37.407067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.566 [2024-11-20 12:37:37.407102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.566 qpair failed and we were unable to recover it. 00:27:54.566 [2024-11-20 12:37:37.407297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.566 [2024-11-20 12:37:37.407332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.566 qpair failed and we were unable to recover it. 00:27:54.566 [2024-11-20 12:37:37.407643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.566 [2024-11-20 12:37:37.407677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.566 qpair failed and we were unable to recover it. 
00:27:54.566 [2024-11-20 12:37:37.407933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.407976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.408277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.408310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.408451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.408483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.408681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.408715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.408998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.409032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.409152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.409191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.409393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.409426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.409657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.409690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.409891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.409923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.410159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.410192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.410410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.410443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.410635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.410666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.410925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.410986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.411182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.411216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.411470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.411504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.411687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.411719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.411926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.411970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.412194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.412227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.412500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.412534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.412757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.412792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.412933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.412979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.413265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.413297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.413437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.413468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.413697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.566 [2024-11-20 12:37:37.413730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.566 qpair failed and we were unable to recover it.
00:27:54.566 [2024-11-20 12:37:37.414014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.414047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.414254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.414288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.414423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.414455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.414732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.414765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.415036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.415068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.415205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.415237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.415488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.415520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.415821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.415852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.416167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.416201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.416407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.416438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.416625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.416659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.416912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.416944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.417209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.417241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.417513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.417546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.417740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.417771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.418068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.418101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.418285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.418318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.418529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.418561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.418745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.418777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.418991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.419028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.419305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.419338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.419525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.419564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.419826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.419859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.420133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.420167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.420351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.420382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.420574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.420606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.420873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.420903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.421164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.421196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.421462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.421495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.421743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.421775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.422028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.422061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.422366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.422398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.422587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.422618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.422918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.422958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.423251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.423283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.423580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.423612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.423886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.423918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.424197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.567 [2024-11-20 12:37:37.424230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.567 qpair failed and we were unable to recover it.
00:27:54.567 [2024-11-20 12:37:37.424447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.424478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.424681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.424714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.424988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.425021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.425220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.425251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.425474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.425506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.425711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.425742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.426040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.426072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.426339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.426371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.426566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.426600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.426875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.426906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.427202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.427237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.427510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.427542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.427810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.427842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.428146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.428179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.428465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.428496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.428700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.428731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.428934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.428980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.429162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.429194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.429447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.429481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.429678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.429710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.429895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.429926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.430218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.430251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.430443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.430475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.430735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.430766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.430972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.431006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.431201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.431235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.431510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.431541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.431843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.431876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.432086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.432119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.432319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.432353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.432654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.432686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.432974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.433007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.433206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.568 [2024-11-20 12:37:37.433237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.568 qpair failed and we were unable to recover it.
00:27:54.568 [2024-11-20 12:37:37.433441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.433475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.433671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.433704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.433895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.433927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.434207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.434241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.434496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.434528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.434711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.434742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.434981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.435016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.435197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.435227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.435421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.435454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.435583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.435615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.435801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.435832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.436027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.436059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.436286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.436318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.436533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.436565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.436864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.436897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.437111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.437146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.437418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.437450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.437629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.437667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.437944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.437988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.438194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.569 [2024-11-20 12:37:37.438226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.569 qpair failed and we were unable to recover it.
00:27:54.569 [2024-11-20 12:37:37.438368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-20 12:37:37.438399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-20 12:37:37.438651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-20 12:37:37.438683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-20 12:37:37.438887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-20 12:37:37.438920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-20 12:37:37.439195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-20 12:37:37.439228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-20 12:37:37.439511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-20 12:37:37.439544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 
00:27:54.569 [2024-11-20 12:37:37.439824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-20 12:37:37.439857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-20 12:37:37.440113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-20 12:37:37.440149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-20 12:37:37.440425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-20 12:37:37.440456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-20 12:37:37.440742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-20 12:37:37.440774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-20 12:37:37.440999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-20 12:37:37.441033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 
00:27:54.569 [2024-11-20 12:37:37.441312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-20 12:37:37.441344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-20 12:37:37.441553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-20 12:37:37.441585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-20 12:37:37.441788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-20 12:37:37.441821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-20 12:37:37.442023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-20 12:37:37.442058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-20 12:37:37.442333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-20 12:37:37.442365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 
00:27:54.569 [2024-11-20 12:37:37.442554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-20 12:37:37.442585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-20 12:37:37.442715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-20 12:37:37.442749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-20 12:37:37.443025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-20 12:37:37.443058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-20 12:37:37.443268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.443299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.443598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.443631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 
00:27:54.570 [2024-11-20 12:37:37.443901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.443933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.444175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.444208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.444461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.444492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.444695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.444727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.444940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.444987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 
00:27:54.570 [2024-11-20 12:37:37.445187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.445220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.445407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.445438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.445588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.445621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.445800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.445831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.446020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.446053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 
00:27:54.570 [2024-11-20 12:37:37.446325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.446357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.446479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.446511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.446738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.446771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.446972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.447005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.447200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.447231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 
00:27:54.570 [2024-11-20 12:37:37.447343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.447374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.447585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.447618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.447745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.447782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.447935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.447995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.448251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.448283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 
00:27:54.570 [2024-11-20 12:37:37.448590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.448621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.448750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.448782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.448997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.449030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.449247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.449278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.449388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.449419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 
00:27:54.570 [2024-11-20 12:37:37.449614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.449645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.449796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.449828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.450156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.450188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.450441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.450474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.450669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.450700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 
00:27:54.570 [2024-11-20 12:37:37.450815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.450849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.451060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.451095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.451276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.451310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.451507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.451538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.451735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.451765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 
00:27:54.570 [2024-11-20 12:37:37.451911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-20 12:37:37.451944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-20 12:37:37.452149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-20 12:37:37.452180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-20 12:37:37.452377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-20 12:37:37.452411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-20 12:37:37.452525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-20 12:37:37.452557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-20 12:37:37.452871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-20 12:37:37.452903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 
00:27:54.571 [2024-11-20 12:37:37.453055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-20 12:37:37.453087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-20 12:37:37.453210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-20 12:37:37.453242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-20 12:37:37.453457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-20 12:37:37.453488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-20 12:37:37.453795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-20 12:37:37.453826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-20 12:37:37.454143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-20 12:37:37.454178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 
00:27:54.571 [2024-11-20 12:37:37.454410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-20 12:37:37.454441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-20 12:37:37.454647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-20 12:37:37.454679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-20 12:37:37.454862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-20 12:37:37.454895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-20 12:37:37.455174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-20 12:37:37.455207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-20 12:37:37.455395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-20 12:37:37.455426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 
00:27:54.571 [2024-11-20 12:37:37.455719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-20 12:37:37.455750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-20 12:37:37.455981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-20 12:37:37.456015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-20 12:37:37.456269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-20 12:37:37.456303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-20 12:37:37.456574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-20 12:37:37.456607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-20 12:37:37.456862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-20 12:37:37.456893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 
00:27:54.571 [2024-11-20 12:37:37.457100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-20 12:37:37.457135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
00:27:54.571 [2024-11-20 12:37:37.457408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-20 12:37:37.457439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
00:27:54.571 [2024-11-20 12:37:37.457703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-20 12:37:37.457751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
00:27:54.571 [2024-11-20 12:37:37.458045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-20 12:37:37.458081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
00:27:54.571 [2024-11-20 12:37:37.458356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-20 12:37:37.458387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
00:27:54.571 [2024-11-20 12:37:37.458679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-20 12:37:37.458711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
00:27:54.571 [2024-11-20 12:37:37.458918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-20 12:37:37.458958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
00:27:54.571 [2024-11-20 12:37:37.459245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-20 12:37:37.459276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
00:27:54.571 [2024-11-20 12:37:37.459471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-20 12:37:37.459501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
00:27:54.571 [2024-11-20 12:37:37.459758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-20 12:37:37.459789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
00:27:54.571 [2024-11-20 12:37:37.459994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-20 12:37:37.460027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
00:27:54.571 [2024-11-20 12:37:37.460211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-20 12:37:37.460242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
00:27:54.571 [2024-11-20 12:37:37.460494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-20 12:37:37.460528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
00:27:54.571 [2024-11-20 12:37:37.460804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-20 12:37:37.460836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
00:27:54.571 [2024-11-20 12:37:37.461070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-20 12:37:37.461103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
00:27:54.571 [2024-11-20 12:37:37.461294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-20 12:37:37.461327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
00:27:54.571 [2024-11-20 12:37:37.461559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-20 12:37:37.461590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
00:27:54.571 [2024-11-20 12:37:37.461891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-20 12:37:37.461922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
00:27:54.571 [2024-11-20 12:37:37.462220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-20 12:37:37.462253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
00:27:54.571 [2024-11-20 12:37:37.462552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.462585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.462777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.462810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.463007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.463040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.463233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.463266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.463469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.463499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.463762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.463794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.464007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.464040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.464319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.464351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.464634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.464667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.464876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.464907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.465210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.465243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.465507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.465539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.465721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.465753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.466007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.466041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.466261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.466294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.466551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.466584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.466828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.466861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.467156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.467190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.467390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.467421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.467694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.467725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.467910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.467942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.468079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.468111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.468390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.468421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.468646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.468684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.468882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.468914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.469124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.469158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.469456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.469487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.469692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.469726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.469916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.469957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.470231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.470262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.470525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.470558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.470742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.470773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.470972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.471005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.471151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.471183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.471398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.471430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.471618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.471649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.471834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.572 [2024-11-20 12:37:37.471865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.572 qpair failed and we were unable to recover it.
00:27:54.572 [2024-11-20 12:37:37.472169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.472205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.472396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.472430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.472557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.472587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.472892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.472924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.473159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.473193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.473452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.473485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.473696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.473727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.473999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.474033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.474296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.474329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.474570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.474600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.474811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.474843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.475037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.475070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.475272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.475305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.475563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.475595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.475814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.475847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.476132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.476164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.476363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.476394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.476546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.476578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.476802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.476834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.477108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.477141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.477351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.477382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.477657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.477687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.477800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.477832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.477971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.478003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.478132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.478164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.478298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.478327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.478520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.478562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.478766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.478796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.479072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.479104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.479240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.479271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.479447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.479478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.479729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.479759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.480009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.480042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.480164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.480195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.573 qpair failed and we were unable to recover it.
00:27:54.573 [2024-11-20 12:37:37.480378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.573 [2024-11-20 12:37:37.480409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.480602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.480632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.480760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.480791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.480992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.481024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.481297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.481328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.481539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.481570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.481803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.481835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.482023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.482055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.482275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.482306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.482503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.482534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.482714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.482744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.483016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.483048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.483262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.483292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.483542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.483574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.483846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.483876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.484094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.484126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.484401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.484432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.484662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.484691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.484978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.485011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.485255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.485287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.485492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.485522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.485789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.485821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.486079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.486111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.486314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.486346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.486640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.574 [2024-11-20 12:37:37.486672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.574 qpair failed and we were unable to recover it.
00:27:54.574 [2024-11-20 12:37:37.486865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.574 [2024-11-20 12:37:37.486896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.574 qpair failed and we were unable to recover it. 00:27:54.574 [2024-11-20 12:37:37.487185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.574 [2024-11-20 12:37:37.487218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.574 qpair failed and we were unable to recover it. 00:27:54.574 [2024-11-20 12:37:37.487517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.574 [2024-11-20 12:37:37.487548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.574 qpair failed and we were unable to recover it. 00:27:54.574 [2024-11-20 12:37:37.487815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.574 [2024-11-20 12:37:37.487846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.574 qpair failed and we were unable to recover it. 00:27:54.574 [2024-11-20 12:37:37.488087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.574 [2024-11-20 12:37:37.488119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.574 qpair failed and we were unable to recover it. 
00:27:54.574 [2024-11-20 12:37:37.488303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.574 [2024-11-20 12:37:37.488333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.574 qpair failed and we were unable to recover it. 00:27:54.574 [2024-11-20 12:37:37.488629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.574 [2024-11-20 12:37:37.488659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.574 qpair failed and we were unable to recover it. 00:27:54.574 [2024-11-20 12:37:37.488861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.574 [2024-11-20 12:37:37.488899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.574 qpair failed and we were unable to recover it. 00:27:54.574 [2024-11-20 12:37:37.489195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.574 [2024-11-20 12:37:37.489228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.574 qpair failed and we were unable to recover it. 00:27:54.574 [2024-11-20 12:37:37.489360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.574 [2024-11-20 12:37:37.489391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.574 qpair failed and we were unable to recover it. 
00:27:54.574 [2024-11-20 12:37:37.489606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.574 [2024-11-20 12:37:37.489637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.574 qpair failed and we were unable to recover it. 00:27:54.574 [2024-11-20 12:37:37.489831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.574 [2024-11-20 12:37:37.489862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.574 qpair failed and we were unable to recover it. 00:27:54.574 [2024-11-20 12:37:37.490120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.574 [2024-11-20 12:37:37.490153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.574 qpair failed and we were unable to recover it. 00:27:54.574 [2024-11-20 12:37:37.490370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.490401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.490666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.490697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 
00:27:54.575 [2024-11-20 12:37:37.490897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.490927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.491191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.491223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.491491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.491522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.491774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.491805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.491986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.492019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 
00:27:54.575 [2024-11-20 12:37:37.492243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.492274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.492552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.492583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.492782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.492812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.493014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.493047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.493326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.493359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 
00:27:54.575 [2024-11-20 12:37:37.493559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.493591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.493843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.493874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.494016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.494049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.494302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.494335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.494651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.494682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 
00:27:54.575 [2024-11-20 12:37:37.494911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.494941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.495225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.495257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.495537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.495568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.495786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.495816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.496103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.496136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 
00:27:54.575 [2024-11-20 12:37:37.496388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.496418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.496683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.496713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.497012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.497044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.497274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.497303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.497482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.497513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 
00:27:54.575 [2024-11-20 12:37:37.497706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.497737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.497989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.498021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.498320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.498352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.498564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.498594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.498871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.498902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 
00:27:54.575 [2024-11-20 12:37:37.499091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.499123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.499325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.499357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.499535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.499573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.499803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.499833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.575 [2024-11-20 12:37:37.500074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.500107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 
00:27:54.575 [2024-11-20 12:37:37.500388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.575 [2024-11-20 12:37:37.500419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.575 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.500694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.500725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.501016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.501048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.501322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.501353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.501583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.501613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 
00:27:54.576 [2024-11-20 12:37:37.501794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.501825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.502108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.502141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.502423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.502453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.502729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.502759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.502976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.503009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 
00:27:54.576 [2024-11-20 12:37:37.503290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.503321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.503606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.503638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.503842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.503873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.504125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.504158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.504358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.504388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 
00:27:54.576 [2024-11-20 12:37:37.504666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.504696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.504892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.504925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.505141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.505173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.505423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.505454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.505706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.505737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 
00:27:54.576 [2024-11-20 12:37:37.505988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.506020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.506270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.506302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.506557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.506587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.506790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.506820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 00:27:54.576 [2024-11-20 12:37:37.507030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.507063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it. 
00:27:54.576 [2024-11-20 12:37:37.507279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.576 [2024-11-20 12:37:37.507310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.576 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it") repeats continuously from 12:37:37.507563 through 12:37:37.538873, always against addr=10.0.0.2, port=4420 — first for tqpair=0x7ff898000b90, then from 12:37:37.529161 onward for tqpair=0x1d57ba0 ...]
00:27:54.579 [2024-11-20 12:37:37.539170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-20 12:37:37.539204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-20 12:37:37.539400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-20 12:37:37.539431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-20 12:37:37.539666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-20 12:37:37.539700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-20 12:37:37.539964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-20 12:37:37.539998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-20 12:37:37.540179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-20 12:37:37.540212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 
00:27:54.579 [2024-11-20 12:37:37.540502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-20 12:37:37.540537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-20 12:37:37.540798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.540833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.540973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.541008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.541263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.541295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.541510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.541541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 
00:27:54.580 [2024-11-20 12:37:37.541682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.541717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.541988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.542021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.542307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.542342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.542594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.542627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.542848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.542880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 
00:27:54.580 [2024-11-20 12:37:37.543078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.543111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.543409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.543443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.543706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.543739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.543935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.543983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.544194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.544227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 
00:27:54.580 [2024-11-20 12:37:37.544433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.544466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.544657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.544689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.544884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.544916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.545128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.545161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.545441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.545474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 
00:27:54.580 [2024-11-20 12:37:37.545673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.545706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.545886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.545921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.546235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.546268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.546497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.546530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.546795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.546828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 
00:27:54.580 [2024-11-20 12:37:37.547114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.547149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.547403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.547437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.547743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.547779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.548025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.548060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.548240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.548273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 
00:27:54.580 [2024-11-20 12:37:37.548576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.548610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.548872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.548903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.549181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.549215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.549339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.549371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.549589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.549618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 
00:27:54.580 [2024-11-20 12:37:37.549820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.549852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.550106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.550143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-20 12:37:37.550397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-20 12:37:37.550431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.550736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.550772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.551040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.551076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 
00:27:54.581 [2024-11-20 12:37:37.551226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.551267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.551534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.551568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.551693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.551725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.551975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.552010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.552316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.552349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 
00:27:54.581 [2024-11-20 12:37:37.552636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.552670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.552878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.552915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.553201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.553236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.553426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.553461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.553769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.553803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 
00:27:54.581 [2024-11-20 12:37:37.554084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.554119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.554309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.554346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.554543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.554578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.554833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.554867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.555175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.555211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 
00:27:54.581 [2024-11-20 12:37:37.555486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.555519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.555757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.555791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.556052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.556087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.556212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.556245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.556516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.556550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 
00:27:54.581 [2024-11-20 12:37:37.556797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.556831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.557145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.557182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.557404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.557440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.557657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.557692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.557846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.557879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 
00:27:54.581 [2024-11-20 12:37:37.558078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.558115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.558434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.558472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.558745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.558778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.559012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.559047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.559248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.559280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 
00:27:54.581 [2024-11-20 12:37:37.559555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.559589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.559726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.559761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.559971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.560006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.560196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-20 12:37:37.560233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-20 12:37:37.560487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.582 [2024-11-20 12:37:37.560522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.582 qpair failed and we were unable to recover it. 
00:27:54.585 [2024-11-20 12:37:37.589390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.589423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.589705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.589740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.590013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.590050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.590309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.590342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.590633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.590667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 
00:27:54.585 [2024-11-20 12:37:37.590900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.590935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.591149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.591184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.591390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.591424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.591648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.591683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.591969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.592004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 
00:27:54.585 [2024-11-20 12:37:37.592149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.592183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.592401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.592435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.592687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.592720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.592997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.593033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.593221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.593256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 
00:27:54.585 [2024-11-20 12:37:37.593457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.593491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.593635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.593669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.593886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.593923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.594068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.594110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.594411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.594446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 
00:27:54.585 [2024-11-20 12:37:37.594718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.594752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.594967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.595004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.595260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.595295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.595562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.595597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.595884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.595919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 
00:27:54.585 [2024-11-20 12:37:37.596218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.596254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.596479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.596514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.596766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.596800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.597083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.597119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 00:27:54.585 [2024-11-20 12:37:37.597397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.585 [2024-11-20 12:37:37.597430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.585 qpair failed and we were unable to recover it. 
00:27:54.585 [2024-11-20 12:37:37.597585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.597620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.597806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.597840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.598060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.598096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.598291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.598326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.598507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.598540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 
00:27:54.586 [2024-11-20 12:37:37.598743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.598776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.599093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.599130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.599283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.599318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.599573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.599608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.599899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.599933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 
00:27:54.586 [2024-11-20 12:37:37.600207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.600242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.600494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.600529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.600750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.600785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.601047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.601084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.601297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.601331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 
00:27:54.586 [2024-11-20 12:37:37.601533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.601575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.601778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.601812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.602043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.602079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.602270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.602304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.602419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.602453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 
00:27:54.586 [2024-11-20 12:37:37.602655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.602689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.602923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.602965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.603269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.603303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.603520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.603555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.603762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.603795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 
00:27:54.586 [2024-11-20 12:37:37.604071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.604106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.604249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.604284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.604428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.604462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.604717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.604750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.605033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.605069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 
00:27:54.586 [2024-11-20 12:37:37.605275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.605309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.605562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.605596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.605872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.605906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.606130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.606164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.606415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.606449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 
00:27:54.586 [2024-11-20 12:37:37.606586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.606619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.606838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.606872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.607133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.607169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-20 12:37:37.607366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-20 12:37:37.607399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.587 qpair failed and we were unable to recover it. 00:27:54.587 [2024-11-20 12:37:37.607653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.587 [2024-11-20 12:37:37.607688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.587 qpair failed and we were unable to recover it. 
00:27:54.587 [2024-11-20 12:37:37.607890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.587 [2024-11-20 12:37:37.607924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.587 qpair failed and we were unable to recover it. 00:27:54.587 [2024-11-20 12:37:37.608120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.587 [2024-11-20 12:37:37.608154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.587 qpair failed and we were unable to recover it. 00:27:54.587 [2024-11-20 12:37:37.608303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.587 [2024-11-20 12:37:37.608343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.587 qpair failed and we were unable to recover it. 00:27:54.587 [2024-11-20 12:37:37.608593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.587 [2024-11-20 12:37:37.608627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.587 qpair failed and we were unable to recover it. 00:27:54.587 [2024-11-20 12:37:37.608815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.587 [2024-11-20 12:37:37.608850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.587 qpair failed and we were unable to recover it. 
00:27:54.587 [2024-11-20 12:37:37.609105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.587 [2024-11-20 12:37:37.609143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.587 qpair failed and we were unable to recover it.
00:27:54.587 [... identical connect() failed (errno = 111, ECONNREFUSED) / qpair recovery failure messages repeated 114 more times for tqpair=0x1d57ba0, addr=10.0.0.2, port=4420; timestamps 12:37:37.609347 through 12:37:37.639727 omitted ...]
00:27:54.590 [2024-11-20 12:37:37.639978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.640014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.640221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.640255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.640523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.640557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.640838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.640872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.641153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.641188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 
00:27:54.590 [2024-11-20 12:37:37.641472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.641506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.641723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.641757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.642010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.642045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.642297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.642330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.642518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.642552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 
00:27:54.590 [2024-11-20 12:37:37.642829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.642861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.643132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.643167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.643297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.643331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.643515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.643549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.643827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.643862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 
00:27:54.590 [2024-11-20 12:37:37.644124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.644160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.644456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.644490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.644760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.644794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.645073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.645110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.645305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.645338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 
00:27:54.590 [2024-11-20 12:37:37.645639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.645672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.645869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.645902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.646135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.646170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.646365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.646399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.646653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.646687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 
00:27:54.590 [2024-11-20 12:37:37.646971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.590 [2024-11-20 12:37:37.647007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.590 qpair failed and we were unable to recover it. 00:27:54.590 [2024-11-20 12:37:37.647331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.647365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.647575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.647607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.647831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.647865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.648127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.648162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 
00:27:54.591 [2024-11-20 12:37:37.648356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.648389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.648601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.648635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.648923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.648980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.649197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.649231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.649484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.649517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 
00:27:54.591 [2024-11-20 12:37:37.649696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.649729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.650028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.650079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.650257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.650296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.650447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.650481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.650688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.650720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 
00:27:54.591 [2024-11-20 12:37:37.650932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.650994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.651161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.651197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.651405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.651439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.651641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.651676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.651860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.651894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 
00:27:54.591 [2024-11-20 12:37:37.652176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.652224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.652442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.652483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.652790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.652829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.653053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.653088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.653281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.653320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 
00:27:54.591 [2024-11-20 12:37:37.653595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.653630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.653832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.653867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.654068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.654105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.654256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.654290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.654426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.654461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 
00:27:54.591 [2024-11-20 12:37:37.654598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.654648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.654923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.654980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-20 12:37:37.655191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-20 12:37:37.655227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.872 [2024-11-20 12:37:37.655439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.655473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.655679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.655714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 
00:27:54.873 [2024-11-20 12:37:37.655924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.655971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.656252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.656286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.656411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.656444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.656647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.656681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.656881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.656915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 
00:27:54.873 [2024-11-20 12:37:37.657224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.657260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.657453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.657500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.657784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.657834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.658060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.658113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.658282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.658329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 
00:27:54.873 [2024-11-20 12:37:37.658633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.658682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.658975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.659027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.659204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.659260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.659471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.659521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.659806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.659857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 
00:27:54.873 [2024-11-20 12:37:37.660072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.660124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.660376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.660428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.660658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.660704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.660997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.661051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.661279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.661323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 
00:27:54.873 [2024-11-20 12:37:37.661510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.661544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.661762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.661798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.662058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.662094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.662230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.662265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.662524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.662559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 
00:27:54.873 [2024-11-20 12:37:37.662781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.662815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.663053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.663088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.663232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.663267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.663469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.663502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.663614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.663647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 
00:27:54.873 [2024-11-20 12:37:37.663930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.663973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.664159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.664193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.664463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.664496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.664713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.664746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 00:27:54.873 [2024-11-20 12:37:37.664975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.665011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.873 qpair failed and we were unable to recover it. 
00:27:54.873 [2024-11-20 12:37:37.665210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.873 [2024-11-20 12:37:37.665244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.665437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.665472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.665724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.665757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.666010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.666049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.666185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.666226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 
00:27:54.874 [2024-11-20 12:37:37.666411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.666445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.666635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.666669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.666921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.666965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.667150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.667183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.667303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.667336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 
00:27:54.874 [2024-11-20 12:37:37.667518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.667552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.667852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.667885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.668195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.668229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.668434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.668468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.668673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.668707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 
00:27:54.874 [2024-11-20 12:37:37.668905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.668939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.669227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.669261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.669527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.669561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.669747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.669826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.670126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.670165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 
00:27:54.874 [2024-11-20 12:37:37.670368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.670402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.670538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.670572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.670766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.670800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.670945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.670989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.671102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.671135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 
00:27:54.874 [2024-11-20 12:37:37.671385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.671418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.671612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.671645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.671773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.671807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.672004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.672038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.672258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.672291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 
00:27:54.874 [2024-11-20 12:37:37.672443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.672476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.672613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.672656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.672909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.672942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.673160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.673193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.673490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.673523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 
00:27:54.874 [2024-11-20 12:37:37.673654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.673686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.673974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.674010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.674208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.674241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.874 qpair failed and we were unable to recover it. 00:27:54.874 [2024-11-20 12:37:37.674455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.874 [2024-11-20 12:37:37.674488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.674737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.674769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 
00:27:54.875 [2024-11-20 12:37:37.674968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.675004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.675283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.675316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.675527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.675559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.675752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.675785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.676008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.676043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 
00:27:54.875 [2024-11-20 12:37:37.676242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.676276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.676480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.676513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.676719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.676752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.677002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.677037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.677289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.677323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 
00:27:54.875 [2024-11-20 12:37:37.677614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.677647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.677845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.677878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.678181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.678216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.678366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.678399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.678546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.678579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 
00:27:54.875 [2024-11-20 12:37:37.678831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.678863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.679053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.679088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.679361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.679394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.679521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.679555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.679782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.679814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 
00:27:54.875 [2024-11-20 12:37:37.679957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.679993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.680245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.680278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.680486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.680519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.680796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.680830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.681087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.681122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 
00:27:54.875 [2024-11-20 12:37:37.681257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.681291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.681494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.681528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.681799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.681832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.682028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.682062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.682334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.682367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 
00:27:54.875 [2024-11-20 12:37:37.682518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.682551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.682680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.682731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.682853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.682887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.683036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.683071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 00:27:54.875 [2024-11-20 12:37:37.683269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.875 [2024-11-20 12:37:37.683301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.875 qpair failed and we were unable to recover it. 
00:27:54.875 [2024-11-20 12:37:37.683517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:54.875 [2024-11-20 12:37:37.683550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 
00:27:54.875 qpair failed and we were unable to recover it. 
[... identical connect() errno = 111 / qpair-failure message pair repeats for tqpair=0x7ff894000b90 (addr=10.0.0.2, port=4420) through 2024-11-20 12:37:37.709161 ...]
00:27:54.879 [2024-11-20 12:37:37.709312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.709345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-20 12:37:37.709618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.709656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-20 12:37:37.709887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.709919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-20 12:37:37.710162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.710197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-20 12:37:37.710464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.710496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 
00:27:54.879 [2024-11-20 12:37:37.710620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.710652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-20 12:37:37.710896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.710929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-20 12:37:37.711212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.711244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-20 12:37:37.711449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.711481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-20 12:37:37.711658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.711691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 
00:27:54.879 [2024-11-20 12:37:37.711868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.711900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-20 12:37:37.712155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.712189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-20 12:37:37.712374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.712405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-20 12:37:37.712689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.712722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-20 12:37:37.713001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.713036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 
00:27:54.879 [2024-11-20 12:37:37.713226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.713260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-20 12:37:37.713450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.713483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-20 12:37:37.713604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.713637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-20 12:37:37.713879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.713913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-20 12:37:37.714132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.714166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 
00:27:54.879 [2024-11-20 12:37:37.714290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.714323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-20 12:37:37.714534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-20 12:37:37.714567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.714755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.714787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.714987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.715022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.715162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.715196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 
00:27:54.880 [2024-11-20 12:37:37.715370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.715403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.715647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.715679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.715919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.715958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.716208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.716281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.716548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.716585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 
00:27:54.880 [2024-11-20 12:37:37.716883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.716916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.717170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.717207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.717476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.717509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.717620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.717653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.717907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.717939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 
00:27:54.880 [2024-11-20 12:37:37.718065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.718098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.718343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.718375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.718565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.718599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.718776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.718808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.718938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.718981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 
00:27:54.880 [2024-11-20 12:37:37.719158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.719191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.719369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.719402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.719673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.719706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.719892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.719925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.720062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.720094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 
00:27:54.880 [2024-11-20 12:37:37.720204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.720236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.720440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.720473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.720688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.720720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.720913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.720945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.721172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.721206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 
00:27:54.880 [2024-11-20 12:37:37.721382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.721414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.721538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.721570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.721700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.721733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.721928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.721970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 00:27:54.880 [2024-11-20 12:37:37.722191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.880 [2024-11-20 12:37:37.722224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.880 qpair failed and we were unable to recover it. 
00:27:54.880 [2024-11-20 12:37:37.722497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.722530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.722724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.722756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.722884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.722917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.723195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.723269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.723465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.723502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 
00:27:54.881 [2024-11-20 12:37:37.723684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.723717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.723844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.723877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.724145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.724181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.724311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.724344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.724531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.724563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 
00:27:54.881 [2024-11-20 12:37:37.724692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.724725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.724992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.725025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.725279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.725313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.725489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.725538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.725676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.725708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 
00:27:54.881 [2024-11-20 12:37:37.725976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.726008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.726144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.726178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.726445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.726477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.726664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.726697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.726874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.726906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 
00:27:54.881 [2024-11-20 12:37:37.727050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.727084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.727213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.727245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.727362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.727395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.727571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.727603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 00:27:54.881 [2024-11-20 12:37:37.727813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.881 [2024-11-20 12:37:37.727846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.881 qpair failed and we were unable to recover it. 
00:27:54.885 [2024-11-20 12:37:37.751974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-20 12:37:37.752006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.885 [2024-11-20 12:37:37.752188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-20 12:37:37.752218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.885 [2024-11-20 12:37:37.752336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-20 12:37:37.752368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.885 [2024-11-20 12:37:37.752560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-20 12:37:37.752591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.885 [2024-11-20 12:37:37.752778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-20 12:37:37.752809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 
00:27:54.885 [2024-11-20 12:37:37.753016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-20 12:37:37.753048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.885 [2024-11-20 12:37:37.753169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-20 12:37:37.753199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.885 [2024-11-20 12:37:37.753323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-20 12:37:37.753353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.885 [2024-11-20 12:37:37.753487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-20 12:37:37.753519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.885 [2024-11-20 12:37:37.753734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.753766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 
00:27:54.886 [2024-11-20 12:37:37.753889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.753923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.754060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.754092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.754304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.754336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.754457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.754488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.754618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.754650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 
00:27:54.886 [2024-11-20 12:37:37.754772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.754805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.754998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.755034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.755275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.755307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.755573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.755606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.755730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.755764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 
00:27:54.886 [2024-11-20 12:37:37.755963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.755998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.756105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.756138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.756320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.756353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.756463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.756497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.756634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.756667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 
00:27:54.886 [2024-11-20 12:37:37.756814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.756847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.757112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.757146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.757274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.757307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.757430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.757462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.757630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.757661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 
00:27:54.886 [2024-11-20 12:37:37.757776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.757808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.757987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.758020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.758156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.758188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.758325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.758356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.758563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.758595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 
00:27:54.886 [2024-11-20 12:37:37.758834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.758867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.759056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.759090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.759268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.759300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.759481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.759519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.759652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.759686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 
00:27:54.886 [2024-11-20 12:37:37.759819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.759852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.760025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-20 12:37:37.760058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-20 12:37:37.760177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.760211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.760345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.760376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.760616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.760648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 
00:27:54.887 [2024-11-20 12:37:37.760824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.760856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.760970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.761004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.761123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.761156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.761374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.761407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.761589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.761620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 
00:27:54.887 [2024-11-20 12:37:37.761804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.761837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.762047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.762081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.762217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.762249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.762421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.762453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.762580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.762613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 
00:27:54.887 [2024-11-20 12:37:37.762806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.762838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.762978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.763012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.763209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.763242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.763348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.763380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.763669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.763700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 
00:27:54.887 [2024-11-20 12:37:37.763876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.763909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.764061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.764095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.764288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.764321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.764558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.764590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.764759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.764791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 
00:27:54.887 [2024-11-20 12:37:37.764974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.765009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.765214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.765246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.765349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.765381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.765487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.765518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.765692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.765724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 
00:27:54.887 [2024-11-20 12:37:37.765913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-20 12:37:37.765945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-20 12:37:37.766082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-20 12:37:37.766115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-20 12:37:37.766360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-20 12:37:37.766391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-20 12:37:37.766561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-20 12:37:37.766594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-20 12:37:37.766784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-20 12:37:37.766817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 
00:27:54.888 [2024-11-20 12:37:37.766996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-20 12:37:37.767030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 
00:27:54.890 [2024-11-20 12:37:37.781735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.890 [2024-11-20 12:37:37.781807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.890 qpair failed and we were unable to recover it. 
00:27:54.891 [2024-11-20 12:37:37.790512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.891 [2024-11-20 12:37:37.790543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.891 qpair failed and we were unable to recover it. 00:27:54.891 [2024-11-20 12:37:37.790667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.891 [2024-11-20 12:37:37.790698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.891 qpair failed and we were unable to recover it. 00:27:54.891 [2024-11-20 12:37:37.790805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.891 [2024-11-20 12:37:37.790836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.891 qpair failed and we were unable to recover it. 00:27:54.891 [2024-11-20 12:37:37.791026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.891 [2024-11-20 12:37:37.791060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.791303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.791336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 
00:27:54.892 [2024-11-20 12:37:37.791465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.791498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.791764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.791797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.791975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.792008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.792141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.792174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.792387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.792420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 
00:27:54.892 [2024-11-20 12:37:37.792553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.792585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.792728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.792761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.792892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.792925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.793125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.793156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.793281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.793312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 
00:27:54.892 [2024-11-20 12:37:37.793427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.793459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.793565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.793596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.793791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.793824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.793936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.793980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.794158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.794190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 
00:27:54.892 [2024-11-20 12:37:37.794311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.794343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.794456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.794489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.794718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.794755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.794895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.794927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.795134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.795166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 
00:27:54.892 [2024-11-20 12:37:37.795287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.795319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.795508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.795540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.795655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.795686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.795860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.795891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.796099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.796133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 
00:27:54.892 [2024-11-20 12:37:37.796251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.796281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.796465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.796496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.892 qpair failed and we were unable to recover it. 00:27:54.892 [2024-11-20 12:37:37.796676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.892 [2024-11-20 12:37:37.796708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.796896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.796928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.797057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.797090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 
00:27:54.893 [2024-11-20 12:37:37.797199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.797231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.797427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.797460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.797582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.797616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.797724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.797755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.797934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.797981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 
00:27:54.893 [2024-11-20 12:37:37.798155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.798187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.798426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.798458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.798579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.798612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.798745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.798777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.798991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.799024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 
00:27:54.893 [2024-11-20 12:37:37.799152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.799184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.799374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.799405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.799522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.799553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.799745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.799777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.799990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.800025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 
00:27:54.893 [2024-11-20 12:37:37.800229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.800262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.800520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.800552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.800752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.800784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.800917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.800956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.801170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.801205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 
00:27:54.893 [2024-11-20 12:37:37.801387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.801419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.801539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.801572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.801806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.801838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.802097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.802130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.802252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.802285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 
00:27:54.893 [2024-11-20 12:37:37.802462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.802495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.802668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.802702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.802822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.802858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.893 qpair failed and we were unable to recover it. 00:27:54.893 [2024-11-20 12:37:37.802985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.893 [2024-11-20 12:37:37.803019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.894 qpair failed and we were unable to recover it. 00:27:54.894 [2024-11-20 12:37:37.803191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.894 [2024-11-20 12:37:37.803226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.894 qpair failed and we were unable to recover it. 
00:27:54.894 [2024-11-20 12:37:37.803333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.894 [2024-11-20 12:37:37.803365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.894 qpair failed and we were unable to recover it. 00:27:54.894 [2024-11-20 12:37:37.803569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.894 [2024-11-20 12:37:37.803602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.894 qpair failed and we were unable to recover it. 00:27:54.894 [2024-11-20 12:37:37.803790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.894 [2024-11-20 12:37:37.803822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.894 qpair failed and we were unable to recover it. 00:27:54.894 [2024-11-20 12:37:37.804065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.894 [2024-11-20 12:37:37.804098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.894 qpair failed and we were unable to recover it. 00:27:54.894 [2024-11-20 12:37:37.804344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.894 [2024-11-20 12:37:37.804376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.894 qpair failed and we were unable to recover it. 
00:27:54.894 [2024-11-20 12:37:37.804574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.894 [2024-11-20 12:37:37.804605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.894 qpair failed and we were unable to recover it. 00:27:54.894 [2024-11-20 12:37:37.804810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.894 [2024-11-20 12:37:37.804843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.894 qpair failed and we were unable to recover it. 00:27:54.894 [2024-11-20 12:37:37.805028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.894 [2024-11-20 12:37:37.805062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.894 qpair failed and we were unable to recover it. 00:27:54.894 [2024-11-20 12:37:37.805185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.894 [2024-11-20 12:37:37.805217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.894 qpair failed and we were unable to recover it. 00:27:54.894 [2024-11-20 12:37:37.805337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.894 [2024-11-20 12:37:37.805368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.894 qpair failed and we were unable to recover it. 
00:27:54.894 [2024-11-20 12:37:37.805552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.894 [2024-11-20 12:37:37.805585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.894 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111 → nvme_tcp_qpair_connect_sock error → "qpair failed and we were unable to recover it.") repeats roughly 110 more times between 12:37:37.805785 and 12:37:37.829366, all for tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 ...]
00:27:54.897 [2024-11-20 12:37:37.829551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-20 12:37:37.829584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-20 12:37:37.829766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-20 12:37:37.829797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-20 12:37:37.829971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-20 12:37:37.830005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-20 12:37:37.830127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-20 12:37:37.830160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.830288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.830320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 
00:27:54.898 [2024-11-20 12:37:37.830494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.830527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.830653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.830686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.830867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.830901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.831044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.831078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.831187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.831220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 
00:27:54.898 [2024-11-20 12:37:37.831464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.831496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.831611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.831644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.831886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.831920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.832055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.832087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.832227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.832260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 
00:27:54.898 [2024-11-20 12:37:37.832538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.832572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.832771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.832803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.832979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.833012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.833192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.833225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.833372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.833404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 
00:27:54.898 [2024-11-20 12:37:37.833518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.833550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.833671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.833703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.833889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.833921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.834039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.834071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.834249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.834280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 
00:27:54.898 [2024-11-20 12:37:37.834451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.834483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.834597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.834629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.834882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.834914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.835046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.835079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.835259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.835290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 
00:27:54.898 [2024-11-20 12:37:37.835405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.835437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.835559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.835592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.835764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.835801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.835992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.836025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.836287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.836319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 
00:27:54.898 [2024-11-20 12:37:37.836456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.836488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.836676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.836707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.836944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.836985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.837111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.837145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-20 12:37:37.837381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-20 12:37:37.837413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 
00:27:54.899 [2024-11-20 12:37:37.837546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.837579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.837701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.837733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.837912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.837945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.838069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.838101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.838293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.838326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 
00:27:54.899 [2024-11-20 12:37:37.838512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.838543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.838787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.838820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.839058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.839091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.839282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.839314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.839494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.839525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 
00:27:54.899 [2024-11-20 12:37:37.839725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.839757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.839994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.840028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.840246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.840279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.840401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.840432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.840624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.840657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 
00:27:54.899 [2024-11-20 12:37:37.840839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.840870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.841050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.841082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.841275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.841308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.841511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.841543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.841764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.841796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 
00:27:54.899 [2024-11-20 12:37:37.841969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.842001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.842267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.842300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.842413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.842444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.842587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.842624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.842735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.842768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 
00:27:54.899 [2024-11-20 12:37:37.843004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.843038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.843318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.843352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.843561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.843594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.843795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.899 [2024-11-20 12:37:37.843828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.899 qpair failed and we were unable to recover it. 00:27:54.899 [2024-11-20 12:37:37.843946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.900 [2024-11-20 12:37:37.843990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.900 qpair failed and we were unable to recover it. 
00:27:54.900 [2024-11-20 12:37:37.844186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.900 [2024-11-20 12:37:37.844218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.900 qpair failed and we were unable to recover it. 00:27:54.900 [2024-11-20 12:37:37.844388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.900 [2024-11-20 12:37:37.844420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.900 qpair failed and we were unable to recover it. 00:27:54.900 [2024-11-20 12:37:37.844609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.900 [2024-11-20 12:37:37.844648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.900 qpair failed and we were unable to recover it. 00:27:54.900 [2024-11-20 12:37:37.844847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.900 [2024-11-20 12:37:37.844881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.900 qpair failed and we were unable to recover it. 00:27:54.900 [2024-11-20 12:37:37.845054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.900 [2024-11-20 12:37:37.845088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.900 qpair failed and we were unable to recover it. 
00:27:54.900 [2024-11-20 12:37:37.845272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.845306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.845543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.845575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.845840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.845874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.846008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.846041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.846164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.846197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.846368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.846401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.846509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.846542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.846716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.846750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.846925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.846976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.847216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.847250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.847420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.847452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.847650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.847684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.847862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.847896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.848086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.848117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.848301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.848335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.848576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.848609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.848811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.848844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.848960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.848995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.849112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.849145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.849386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.849418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.849543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.849575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.849753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.849786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.849902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.849935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.850089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.850122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.850256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.850290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.850460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.850493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.850613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.850655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.850829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.850862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.850984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.851017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.851142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.851174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.851296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.851329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.900 [2024-11-20 12:37:37.851544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.900 [2024-11-20 12:37:37.851578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.900 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.851787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.851819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.851964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.851998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.852130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.852162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.852278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.852309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.852482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.852514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.852685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.852723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.852983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.853016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.853191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.853224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.853469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.853501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.853713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.853746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.853932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.853972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.854173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.854205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.854329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.854363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.854567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.854599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.854769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.854802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.855063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.855097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.855218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.855250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.855430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.855462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.855586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.855619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.855815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.855847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.856032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.856065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.856240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.856272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.856480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.856512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.856640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.856673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.856795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.856826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.857011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.857044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.857238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.857269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.857442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.857474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.857583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.857617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.857740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.857771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.857942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.857984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.858098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.858130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.858358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.858430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.901 qpair failed and we were unable to recover it.
00:27:54.901 [2024-11-20 12:37:37.858653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.901 [2024-11-20 12:37:37.858691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.858897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.858930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.859071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.859105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.859224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.859255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.859446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.859477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.859667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.859700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.859880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.859911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.860117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.860150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.860333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.860365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.860473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.860505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.860612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.860645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.860832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.860863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.860977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.861020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.861261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.861292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.861432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.861465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.861669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.861701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.861841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.861873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.862124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.862157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.862287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.862319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.862556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.862588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.862710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.862741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.862927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.862972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.863229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.863260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.863500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.863532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.863659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.863691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.863867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.863899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.864047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.864080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.864197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.864227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.864414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.864444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.864618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.864650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.864889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.864922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.865107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.865140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.865273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.865306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.865422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.865453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.865574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.865606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.865730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.865763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.865970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.866004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.866181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.866214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.902 qpair failed and we were unable to recover it.
00:27:54.902 [2024-11-20 12:37:37.866354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.902 [2024-11-20 12:37:37.866387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.903 qpair failed and we were unable to recover it.
00:27:54.903 [2024-11-20 12:37:37.866563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.903 [2024-11-20 12:37:37.866634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.903 qpair failed and we were unable to recover it.
00:27:54.903 [2024-11-20 12:37:37.866843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.903 [2024-11-20 12:37:37.866880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.903 qpair failed and we were unable to recover it.
00:27:54.903 [2024-11-20 12:37:37.866993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.903 [2024-11-20 12:37:37.867028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.903 qpair failed and we were unable to recover it.
00:27:54.903 [2024-11-20 12:37:37.867220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.903 [2024-11-20 12:37:37.867253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.903 qpair failed and we were unable to recover it.
00:27:54.903 [2024-11-20 12:37:37.867432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.903 [2024-11-20 12:37:37.867466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.903 qpair failed and we were unable to recover it.
00:27:54.903 [2024-11-20 12:37:37.867704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.903 [2024-11-20 12:37:37.867736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.903 qpair failed and we were unable to recover it.
00:27:54.903 [2024-11-20 12:37:37.867861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.903 [2024-11-20 12:37:37.867894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.903 qpair failed and we were unable to recover it.
00:27:54.903 [2024-11-20 12:37:37.868094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.903 [2024-11-20 12:37:37.868128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.903 qpair failed and we were unable to recover it.
00:27:54.903 [2024-11-20 12:37:37.868395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.903 [2024-11-20 12:37:37.868426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.903 qpair failed and we were unable to recover it.
00:27:54.903 [2024-11-20 12:37:37.868665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.903 [2024-11-20 12:37:37.868698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.903 qpair failed and we were unable to recover it.
00:27:54.903 [2024-11-20 12:37:37.868960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.903 [2024-11-20 12:37:37.868996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.903 qpair failed and we were unable to recover it.
00:27:54.903 [2024-11-20 12:37:37.869179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.869212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.869402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.869434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.869607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.869640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.869846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.869879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.870121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.870156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 
00:27:54.903 [2024-11-20 12:37:37.870265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.870297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.870492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.870525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.870713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.870747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.870931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.870973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.871084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.871117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 
00:27:54.903 [2024-11-20 12:37:37.871236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.871268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.871372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.871404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.871522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.871554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.871672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.871705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.871804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.871837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 
00:27:54.903 [2024-11-20 12:37:37.871967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.872002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.872181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.872219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.872403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.872435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.872673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.872705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.872944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.872986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 
00:27:54.903 [2024-11-20 12:37:37.873237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.873269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.873462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.873495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.873677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.873709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.873907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.873940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.874176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.874209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 
00:27:54.903 [2024-11-20 12:37:37.874425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.874458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.903 qpair failed and we were unable to recover it. 00:27:54.903 [2024-11-20 12:37:37.874582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.903 [2024-11-20 12:37:37.874615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.874809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.874841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.875106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.875140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.875273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.875305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 
00:27:54.904 [2024-11-20 12:37:37.875483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.875517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.875710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.875743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.875922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.875964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.876137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.876170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.876313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.876347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 
00:27:54.904 [2024-11-20 12:37:37.876538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.876571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.876743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.876775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.876960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.876994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.877178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.877212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.877382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.877414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 
00:27:54.904 [2024-11-20 12:37:37.877528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.877559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.877675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.877709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.877955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.877989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.878183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.878228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.878414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.878446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 
00:27:54.904 [2024-11-20 12:37:37.878561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.878593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.878709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.878740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.878920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.878964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.879178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.879212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.879334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.879368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 
00:27:54.904 [2024-11-20 12:37:37.879555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.879587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.879840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.879873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.880083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.880117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.880312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.880345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.880587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.880621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 
00:27:54.904 [2024-11-20 12:37:37.880850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.880882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.881083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-20 12:37:37.881116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-20 12:37:37.881323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.905 [2024-11-20 12:37:37.881357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.905 qpair failed and we were unable to recover it. 00:27:54.905 [2024-11-20 12:37:37.881489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.905 [2024-11-20 12:37:37.881520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.905 qpair failed and we were unable to recover it. 00:27:54.905 [2024-11-20 12:37:37.881712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.905 [2024-11-20 12:37:37.881745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.905 qpair failed and we were unable to recover it. 
00:27:54.905 [2024-11-20 12:37:37.881867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.905 [2024-11-20 12:37:37.881900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.905 qpair failed and we were unable to recover it. 00:27:54.905 [2024-11-20 12:37:37.882015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.905 [2024-11-20 12:37:37.882048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.905 qpair failed and we were unable to recover it. 00:27:54.905 [2024-11-20 12:37:37.882291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.905 [2024-11-20 12:37:37.882324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.905 qpair failed and we were unable to recover it. 00:27:54.905 [2024-11-20 12:37:37.882509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.905 [2024-11-20 12:37:37.882541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.905 qpair failed and we were unable to recover it. 00:27:54.905 [2024-11-20 12:37:37.882660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.905 [2024-11-20 12:37:37.882694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.905 qpair failed and we were unable to recover it. 
00:27:54.905 [2024-11-20 12:37:37.882885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.905 [2024-11-20 12:37:37.882917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.905 qpair failed and we were unable to recover it. 00:27:54.905 [2024-11-20 12:37:37.883115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.905 [2024-11-20 12:37:37.883148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.905 qpair failed and we were unable to recover it. 00:27:54.905 [2024-11-20 12:37:37.883257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.905 [2024-11-20 12:37:37.883290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.905 qpair failed and we were unable to recover it. 00:27:54.905 [2024-11-20 12:37:37.883416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.905 [2024-11-20 12:37:37.883448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.905 qpair failed and we were unable to recover it. 00:27:54.905 [2024-11-20 12:37:37.883572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.905 [2024-11-20 12:37:37.883605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.905 qpair failed and we were unable to recover it. 
00:27:54.905 [2024-11-20 12:37:37.883725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.905 [2024-11-20 12:37:37.883764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.905 qpair failed and we were unable to recover it. 00:27:54.905 [2024-11-20 12:37:37.883970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.905 [2024-11-20 12:37:37.884005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.905 qpair failed and we were unable to recover it. 00:27:54.905 [2024-11-20 12:37:37.884174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.905 [2024-11-20 12:37:37.884207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.905 qpair failed and we were unable to recover it. 00:27:54.905 [2024-11-20 12:37:37.884333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.905 [2024-11-20 12:37:37.884366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.905 qpair failed and we were unable to recover it. 00:27:54.905 [2024-11-20 12:37:37.884626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.905 [2024-11-20 12:37:37.884660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.905 qpair failed and we were unable to recover it. 
00:27:54.905 [2024-11-20 12:37:37.884841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.905 [2024-11-20 12:37:37.884873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.905 qpair failed and we were unable to recover it.
[... identical connect() errno = 111 / qpair-failure triplet repeated for tqpair=0x1d57ba0 from 12:37:37.884997 through 12:37:37.891451 ...]
00:27:54.906 [2024-11-20 12:37:37.891696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.906 [2024-11-20 12:37:37.891768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:54.906 qpair failed and we were unable to recover it.
[... identical connect() errno = 111 / qpair-failure triplet repeated for tqpair=0x7ff894000b90 from 12:37:37.891982 through 12:37:37.905164 ...]
00:27:54.907 [2024-11-20 12:37:37.905493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.907 [2024-11-20 12:37:37.905567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.907 qpair failed and we were unable to recover it.
[... identical connect() errno = 111 / qpair-failure triplet repeated for tqpair=0x1d57ba0 from 12:37:37.905713 through 12:37:37.912716 ...]
00:27:54.908 [2024-11-20 12:37:37.912838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-20 12:37:37.912870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-20 12:37:37.913080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-20 12:37:37.913113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-20 12:37:37.913225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-20 12:37:37.913258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-20 12:37:37.913462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-20 12:37:37.913494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-20 12:37:37.913610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-20 12:37:37.913642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 
00:27:54.908 [2024-11-20 12:37:37.913775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-20 12:37:37.913809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-20 12:37:37.914072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-20 12:37:37.914106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-20 12:37:37.914231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-20 12:37:37.914265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-20 12:37:37.914450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-20 12:37:37.914482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-20 12:37:37.914662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-20 12:37:37.914696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 
00:27:54.908 [2024-11-20 12:37:37.914871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-20 12:37:37.914904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-20 12:37:37.915095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-20 12:37:37.915130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-20 12:37:37.915248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-20 12:37:37.915281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-20 12:37:37.915499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-20 12:37:37.915531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-20 12:37:37.915646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.915679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 
00:27:54.909 [2024-11-20 12:37:37.915801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.915841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.916016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.916049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.916158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.916192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.916296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.916329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.916442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.916475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 
00:27:54.909 [2024-11-20 12:37:37.916652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.916685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.916872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.916905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.917032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.917066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.917293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.917327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.917524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.917557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 
00:27:54.909 [2024-11-20 12:37:37.917756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.917790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.917899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.917932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.918067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.918115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.918248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.918280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.918423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.918456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 
00:27:54.909 [2024-11-20 12:37:37.918586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.918619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.918737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.918770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.918959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.918993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.919236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.919268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.919387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.919419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 
00:27:54.909 [2024-11-20 12:37:37.919525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.919558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.919680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.919714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.919819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.919851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.920092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.920127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.920393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.920426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 
00:27:54.909 [2024-11-20 12:37:37.920531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.920565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.920679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.920712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.920914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.920956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.921145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.921179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.921293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.921326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 
00:27:54.909 [2024-11-20 12:37:37.921495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.921528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.921792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.921825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.921966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.922001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.922178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.922210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.922401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.922434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 
00:27:54.909 [2024-11-20 12:37:37.922567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.922599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-20 12:37:37.922747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-20 12:37:37.922780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-20 12:37:37.922885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.922918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-20 12:37:37.923168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.923201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-20 12:37:37.923377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.923410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 
00:27:54.910 [2024-11-20 12:37:37.923527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.923559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-20 12:37:37.923787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.923859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-20 12:37:37.924069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.924107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-20 12:37:37.924285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.924319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-20 12:37:37.924468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.924500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 
00:27:54.910 [2024-11-20 12:37:37.924775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.924808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-20 12:37:37.925049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.925085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-20 12:37:37.925223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.925254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-20 12:37:37.925372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.925404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-20 12:37:37.925524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.925556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 
00:27:54.910 [2024-11-20 12:37:37.925669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.925701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-20 12:37:37.925937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.925982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-20 12:37:37.926095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.926127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-20 12:37:37.926316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.926348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-20 12:37:37.926485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.926526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 
00:27:54.910 [2024-11-20 12:37:37.926792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.926826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-20 12:37:37.926960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.926995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-20 12:37:37.927165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.927196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-20 12:37:37.927301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.927333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-20 12:37:37.927432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-20 12:37:37.927467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 
00:27:54.910 [2024-11-20 12:37:37.927673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.910 [2024-11-20 12:37:37.927706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.910 qpair failed and we were unable to recover it.
00:27:54.910 [2024-11-20 12:37:37.927839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.910 [2024-11-20 12:37:37.927871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.910 qpair failed and we were unable to recover it.
00:27:54.910 [2024-11-20 12:37:37.928046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.910 [2024-11-20 12:37:37.928081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.910 qpair failed and we were unable to recover it.
00:27:54.910 [2024-11-20 12:37:37.928268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.910 [2024-11-20 12:37:37.928301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.910 qpair failed and we were unable to recover it.
00:27:54.910 [2024-11-20 12:37:37.928543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.910 [2024-11-20 12:37:37.928576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.910 qpair failed and we were unable to recover it.
00:27:54.910 [2024-11-20 12:37:37.928883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.910 [2024-11-20 12:37:37.928915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.910 qpair failed and we were unable to recover it.
00:27:54.910 [2024-11-20 12:37:37.929046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.910 [2024-11-20 12:37:37.929080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.910 qpair failed and we were unable to recover it.
00:27:54.910 [2024-11-20 12:37:37.929319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.929352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.929534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.929567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.929698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.929730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.929857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.929889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.930026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.930061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.930188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.930220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.930457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.930488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.930612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.930643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.930771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.930802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.931001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.931035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.931209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.931241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.931365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.931399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.931584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.931616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.931787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.931819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.932034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.932072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.932199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.932233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.932413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.932448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.932582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.932616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.932821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.932854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.933111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.933145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.933325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.933357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.933489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.933522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.933642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.933675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.933796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.933829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.934015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.934050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.934239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.934270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.934534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.934568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.934687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.934719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.934852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.934885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.935071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.935105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.935287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.935320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.935536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.935569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.935681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.935714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.935888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.935923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.936083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.936116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.936305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.936339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.936469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.911 [2024-11-20 12:37:37.936503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.911 qpair failed and we were unable to recover it.
00:27:54.911 [2024-11-20 12:37:37.936702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.936734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.936915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.936960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.937143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.937176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.937291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.937324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.937578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.937617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.937752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.937787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.937920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.937963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.938079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.938113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.938329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.938361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.938575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.938608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.938849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.938882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.939065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.939101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.939329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.939362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.939487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.939521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.939639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.939672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.939852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.939885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.940087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.940122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.940398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.940432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.940624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.940658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.940895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.940928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.941226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.941260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.941456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.941490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.941728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.941761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.941955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.941988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.942228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.942261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.942528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.942560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.942683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.942716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.942887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.942921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.942976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d65af0 (9): Bad file descriptor
00:27:54.912 [2024-11-20 12:37:37.943188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.943223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.943424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.943455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.943580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.943613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.943863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.943896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.944123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.944158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.944408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.944440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.944561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.944593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.944780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.944812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.944982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.945015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.912 [2024-11-20 12:37:37.945200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.912 [2024-11-20 12:37:37.945233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.912 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.945374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.945406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.945541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.945574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.945755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.945788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.945914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.945956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.946073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.946105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.946273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.946305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.946484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.946522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.946700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.946732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.946908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.946941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.947242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.947275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.947446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.947478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.947698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.947730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.947931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.947976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.948217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.948249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.948372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.948405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.948540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.948572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.948811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.948844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.949087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.949122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.949226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.949260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.949464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.949495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.949689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.949721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.949919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.949960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.950203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.950235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.950422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.950455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.950640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.950672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.950855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.950888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.951024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.951057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.951246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.951278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.951484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.951517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.951712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.951744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.951918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.951956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.952230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.913 [2024-11-20 12:37:37.952263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.913 qpair failed and we were unable to recover it.
00:27:54.913 [2024-11-20 12:37:37.952502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.914 [2024-11-20 12:37:37.952535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:54.914 qpair failed and we were unable to recover it.
00:27:54.914 [2024-11-20 12:37:37.952647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.952680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.952919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.952961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.953096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.953129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.953368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.953400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.953521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.953554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 
00:27:54.914 [2024-11-20 12:37:37.953740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.953771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.953899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.953931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.954046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.954079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.954332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.954364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.954574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.954606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 
00:27:54.914 [2024-11-20 12:37:37.954794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.954827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.955114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.955148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.955318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.955351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.955568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.955607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.955736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.955768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 
00:27:54.914 [2024-11-20 12:37:37.955903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.955935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.956120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.956152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.956415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.956448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.956700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.956732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.956915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.956957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 
00:27:54.914 [2024-11-20 12:37:37.957089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.957122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.957306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.957338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.957471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.957503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.957622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.957655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.957838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.957870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 
00:27:54.914 [2024-11-20 12:37:37.958052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.958085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.958212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.958244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.958370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.958405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.958517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.958549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.958754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.958786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 
00:27:54.914 [2024-11-20 12:37:37.958982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.959015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.959267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.959299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.959420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.959452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.959637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.959669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.959857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.959889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 
00:27:54.914 [2024-11-20 12:37:37.960098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.960132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-20 12:37:37.960397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-20 12:37:37.960429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.960628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.960660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.960855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.960888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.961039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.961073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 
00:27:54.915 [2024-11-20 12:37:37.961264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.961296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.961416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.961447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.961684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.961717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.961911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.961944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.962137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.962170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 
00:27:54.915 [2024-11-20 12:37:37.962308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.962341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.962462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.962495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.962611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.962643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.962766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.962798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.962922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.962964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 
00:27:54.915 [2024-11-20 12:37:37.963101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.963133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.963313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.963346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.963463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.963495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.963605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.963642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.963767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.963800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 
00:27:54.915 [2024-11-20 12:37:37.964009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.964044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.964179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.964210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.964333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.964365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.964588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.964621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.964752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.964783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 
00:27:54.915 [2024-11-20 12:37:37.964966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.965000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.965248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.965279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.965413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.965445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.965629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.965662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.965845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.965878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 
00:27:54.915 [2024-11-20 12:37:37.966059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.966092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-20 12:37:37.966272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-20 12:37:37.966304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:55.218 [2024-11-20 12:37:37.966429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.218 [2024-11-20 12:37:37.966462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.218 qpair failed and we were unable to recover it. 00:27:55.218 [2024-11-20 12:37:37.966639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.218 [2024-11-20 12:37:37.966672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.218 qpair failed and we were unable to recover it. 00:27:55.218 [2024-11-20 12:37:37.966878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.218 [2024-11-20 12:37:37.966910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.218 qpair failed and we were unable to recover it. 
00:27:55.218 [2024-11-20 12:37:37.967068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.218 [2024-11-20 12:37:37.967102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.218 qpair failed and we were unable to recover it. 00:27:55.218 [2024-11-20 12:37:37.967223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.218 [2024-11-20 12:37:37.967255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.218 qpair failed and we were unable to recover it. 00:27:55.218 [2024-11-20 12:37:37.967378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.218 [2024-11-20 12:37:37.967409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.218 qpair failed and we were unable to recover it. 00:27:55.218 [2024-11-20 12:37:37.967541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.218 [2024-11-20 12:37:37.967573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.218 qpair failed and we were unable to recover it. 00:27:55.218 [2024-11-20 12:37:37.967680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.218 [2024-11-20 12:37:37.967713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.218 qpair failed and we were unable to recover it. 
00:27:55.218 [2024-11-20 12:37:37.967892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.218 [2024-11-20 12:37:37.967924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.218 qpair failed and we were unable to recover it.
00:27:55.218-00:27:55.222 [... same three-line error sequence (posix_sock_create connect() errno 111 → nvme_tcp_qpair_connect_sock error on tqpair=0x7ff898000b90, addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeated through 2024-11-20 12:37:37.992021 ...]
00:27:55.222 [2024-11-20 12:37:37.992252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.992284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.992466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.992498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.992626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.992658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.992894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.992927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.993113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.993145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 
00:27:55.222 [2024-11-20 12:37:37.993264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.993295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.993487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.993519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.993701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.993733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.993931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.993971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.994150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.994183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 
00:27:55.222 [2024-11-20 12:37:37.994316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.994349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.994590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.994622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.994752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.994784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.994901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.994933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.995121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.995154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 
00:27:55.222 [2024-11-20 12:37:37.995418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.995451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.995567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.995598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.995728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.995760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.995881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.995914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.996168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.996201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 
00:27:55.222 [2024-11-20 12:37:37.996394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.996426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.996599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.996631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.996831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.996864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.996990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.997029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.997211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.997245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 
00:27:55.222 [2024-11-20 12:37:37.997377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.997409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.997525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.997558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.997830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.997862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.997972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.998005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.998271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.998304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 
00:27:55.222 [2024-11-20 12:37:37.998513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.998544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.998665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.998697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.998886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.998918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.999226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.222 [2024-11-20 12:37:37.999260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.222 qpair failed and we were unable to recover it. 00:27:55.222 [2024-11-20 12:37:37.999434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:37.999466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 
00:27:55.223 [2024-11-20 12:37:37.999640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:37.999671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:37.999919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:37.999960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.000148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.000182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.000367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.000400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.000524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.000556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 
00:27:55.223 [2024-11-20 12:37:38.000658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.000691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.000807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.000839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.000979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.001014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.001252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.001284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.001406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.001438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 
00:27:55.223 [2024-11-20 12:37:38.001553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.001585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.001706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.001738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.001911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.001944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.002200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.002235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.002436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.002468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 
00:27:55.223 [2024-11-20 12:37:38.002656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.002689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.002885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.002919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.003118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.003150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.003334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.003368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.003549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.003580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 
00:27:55.223 [2024-11-20 12:37:38.003698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.003732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.003990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.004024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.004217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.004251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.004507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.004540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.004783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.004816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 
00:27:55.223 [2024-11-20 12:37:38.004925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.004967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.005181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.005213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.005334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.005368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.005603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.005641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 00:27:55.223 [2024-11-20 12:37:38.005821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.223 [2024-11-20 12:37:38.005854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.223 qpair failed and we were unable to recover it. 
00:27:55.223 [2024-11-20 12:37:38.006029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.224 [2024-11-20 12:37:38.006064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.224 qpair failed and we were unable to recover it. 00:27:55.224 [2024-11-20 12:37:38.006249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.224 [2024-11-20 12:37:38.006281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.224 qpair failed and we were unable to recover it. 00:27:55.224 [2024-11-20 12:37:38.006452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.224 [2024-11-20 12:37:38.006484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.224 qpair failed and we were unable to recover it. 00:27:55.224 [2024-11-20 12:37:38.006595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.224 [2024-11-20 12:37:38.006629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.224 qpair failed and we were unable to recover it. 00:27:55.224 [2024-11-20 12:37:38.006867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.224 [2024-11-20 12:37:38.006900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.224 qpair failed and we were unable to recover it. 
00:27:55.224 [2024-11-20 12:37:38.007080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.224 [2024-11-20 12:37:38.007115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.224 qpair failed and we were unable to recover it. 00:27:55.224 [2024-11-20 12:37:38.007292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.224 [2024-11-20 12:37:38.007326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.224 qpair failed and we were unable to recover it. 00:27:55.224 [2024-11-20 12:37:38.007457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.224 [2024-11-20 12:37:38.007489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.224 qpair failed and we were unable to recover it. 00:27:55.224 [2024-11-20 12:37:38.007673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.224 [2024-11-20 12:37:38.007705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.224 qpair failed and we were unable to recover it. 00:27:55.224 [2024-11-20 12:37:38.007818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.224 [2024-11-20 12:37:38.007849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.224 qpair failed and we were unable to recover it. 
00:27:55.224 [2024-11-20 12:37:38.007975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.224 [2024-11-20 12:37:38.008008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.224 qpair failed and we were unable to recover it.
[... the identical connect()/qpair failure sequence (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously through 12:37:38.027678 ...]
00:27:55.226 [2024-11-20 12:37:38.027837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.226 [2024-11-20 12:37:38.027909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.226 qpair failed and we were unable to recover it.
[... the same failure sequence then repeats for tqpair=0x7ff894000b90 (addr=10.0.0.2, port=4420) through 12:37:38.032040 ...]
00:27:55.227 [2024-11-20 12:37:38.035169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.035206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.035395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.035426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.035598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.035628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.035809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.035841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.035973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.036004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 
00:27:55.227 [2024-11-20 12:37:38.036183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.036215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.036452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.036484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.036682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.036715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.036828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.036860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.037125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.037159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 
00:27:55.227 [2024-11-20 12:37:38.037296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.037328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.037517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.037550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.037669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.037701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.037823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.037856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.038165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.038198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 
00:27:55.227 [2024-11-20 12:37:38.038391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.038425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.038676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.038709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.038894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.038926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.039186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.039218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.039406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.039440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 
00:27:55.227 [2024-11-20 12:37:38.039570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.039604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.039723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.039756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.039963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.039998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.040203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.040235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.227 qpair failed and we were unable to recover it. 00:27:55.227 [2024-11-20 12:37:38.040344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.227 [2024-11-20 12:37:38.040376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 
00:27:55.228 [2024-11-20 12:37:38.040514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.040546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.040814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.040845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.040985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.041018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.041140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.041173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.041364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.041397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 
00:27:55.228 [2024-11-20 12:37:38.041530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.041562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.041686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.041718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.041927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.041971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.042170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.042208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.042399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.042431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 
00:27:55.228 [2024-11-20 12:37:38.042719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.042752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.042941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.042982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.043169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.043201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.043388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.043421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.043539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.043571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 
00:27:55.228 [2024-11-20 12:37:38.043754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.043788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.043930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.043972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.044147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.044180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.044368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.044400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.044640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.044673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 
00:27:55.228 [2024-11-20 12:37:38.044782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.044815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.045076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.045112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.045246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.045279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.045487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.045519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.045708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.045740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 
00:27:55.228 [2024-11-20 12:37:38.045913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.045945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.046172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.046205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.046333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.046364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.046495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.046527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.046721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.046754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 
00:27:55.228 [2024-11-20 12:37:38.046926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.046971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.047091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.228 [2024-11-20 12:37:38.047123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.228 qpair failed and we were unable to recover it. 00:27:55.228 [2024-11-20 12:37:38.047238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.229 [2024-11-20 12:37:38.047271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.229 qpair failed and we were unable to recover it. 00:27:55.229 [2024-11-20 12:37:38.047483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.229 [2024-11-20 12:37:38.047515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.229 qpair failed and we were unable to recover it. 00:27:55.229 [2024-11-20 12:37:38.047778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.229 [2024-11-20 12:37:38.047809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.229 qpair failed and we were unable to recover it. 
00:27:55.229 [2024-11-20 12:37:38.048022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.229 [2024-11-20 12:37:38.048057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.229 qpair failed and we were unable to recover it. 00:27:55.229 [2024-11-20 12:37:38.048186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.229 [2024-11-20 12:37:38.048218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.229 qpair failed and we were unable to recover it. 00:27:55.229 [2024-11-20 12:37:38.048393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.229 [2024-11-20 12:37:38.048426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.229 qpair failed and we were unable to recover it. 00:27:55.229 [2024-11-20 12:37:38.048606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.229 [2024-11-20 12:37:38.048639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.229 qpair failed and we were unable to recover it. 00:27:55.229 [2024-11-20 12:37:38.048848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.229 [2024-11-20 12:37:38.048881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.229 qpair failed and we were unable to recover it. 
00:27:55.229 [2024-11-20 12:37:38.049013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.229 [2024-11-20 12:37:38.049048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.229 qpair failed and we were unable to recover it. 00:27:55.229 [2024-11-20 12:37:38.049232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.229 [2024-11-20 12:37:38.049265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.229 qpair failed and we were unable to recover it. 00:27:55.229 [2024-11-20 12:37:38.049478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.229 [2024-11-20 12:37:38.049510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.229 qpair failed and we were unable to recover it. 00:27:55.229 [2024-11-20 12:37:38.049692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.229 [2024-11-20 12:37:38.049723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.229 qpair failed and we were unable to recover it. 00:27:55.229 [2024-11-20 12:37:38.049916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.229 [2024-11-20 12:37:38.049955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.229 qpair failed and we were unable to recover it. 
00:27:55.229 [2024-11-20 12:37:38.050219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.229 [2024-11-20 12:37:38.050253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.229 qpair failed and we were unable to recover it. 00:27:55.229 [2024-11-20 12:37:38.050443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.229 [2024-11-20 12:37:38.050476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.229 qpair failed and we were unable to recover it. 00:27:55.229 [2024-11-20 12:37:38.050601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.229 [2024-11-20 12:37:38.050635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.229 qpair failed and we were unable to recover it. 00:27:55.229 [2024-11-20 12:37:38.050808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.229 [2024-11-20 12:37:38.050840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.229 qpair failed and we were unable to recover it. 00:27:55.229 [2024-11-20 12:37:38.050976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.229 [2024-11-20 12:37:38.051010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.229 qpair failed and we were unable to recover it. 
00:27:55.229 [2024-11-20 12:37:38.051276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.229 [2024-11-20 12:37:38.051309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.229 qpair failed and we were unable to recover it.
00:27:55.232 [... the same connect()/sock-connection-error/qpair-failure sequence repeated for every subsequent retry through 2024-11-20 12:37:38.075875; each attempt to 10.0.0.2, port=4420 failed with errno = 111 (ECONNREFUSED) and no qpair could be recovered ...]
00:27:55.232 [2024-11-20 12:37:38.075992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.232 [2024-11-20 12:37:38.076026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.232 qpair failed and we were unable to recover it. 00:27:55.232 [2024-11-20 12:37:38.076157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.232 [2024-11-20 12:37:38.076190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.232 qpair failed and we were unable to recover it. 00:27:55.232 [2024-11-20 12:37:38.076396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.232 [2024-11-20 12:37:38.076428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.232 qpair failed and we were unable to recover it. 00:27:55.232 [2024-11-20 12:37:38.076532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.232 [2024-11-20 12:37:38.076563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.232 qpair failed and we were unable to recover it. 00:27:55.232 [2024-11-20 12:37:38.076753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.232 [2024-11-20 12:37:38.076786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.232 qpair failed and we were unable to recover it. 
00:27:55.232 [2024-11-20 12:37:38.076906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.232 [2024-11-20 12:37:38.076938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.232 qpair failed and we were unable to recover it. 00:27:55.232 [2024-11-20 12:37:38.077117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.232 [2024-11-20 12:37:38.077150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.232 qpair failed and we were unable to recover it. 00:27:55.232 [2024-11-20 12:37:38.077387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.232 [2024-11-20 12:37:38.077419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.232 qpair failed and we were unable to recover it. 00:27:55.232 [2024-11-20 12:37:38.077546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.232 [2024-11-20 12:37:38.077578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.232 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.077789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.077820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 
00:27:55.233 [2024-11-20 12:37:38.078027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.078061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.078326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.078359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.078651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.078683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.078810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.078843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.079096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.079130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 
00:27:55.233 [2024-11-20 12:37:38.079394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.079425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.079603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.079636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.079904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.079937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.080068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.080101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.080380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.080412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 
00:27:55.233 [2024-11-20 12:37:38.080714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.080747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.080990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.081024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.081209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.081241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.081482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.081515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.081755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.081787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 
00:27:55.233 [2024-11-20 12:37:38.081996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.082030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.082251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.082284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.082496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.082528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.082721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.082754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.082963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.082996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 
00:27:55.233 [2024-11-20 12:37:38.083186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.083224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.083409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.083441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.083728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.083759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.083968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.084001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.084239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.084272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 
00:27:55.233 [2024-11-20 12:37:38.084443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.084476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.084746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.084778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.084901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.084932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.085121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.085154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.085279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.085310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 
00:27:55.233 [2024-11-20 12:37:38.085562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.085594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.085773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.085806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.085995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.086029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.086208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.086241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.086490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.086523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 
00:27:55.233 [2024-11-20 12:37:38.086725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.086758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.233 [2024-11-20 12:37:38.086929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.233 [2024-11-20 12:37:38.086970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.233 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.087235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.087267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.087390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.087422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.087541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.087573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 
00:27:55.234 [2024-11-20 12:37:38.087672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.087705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.087820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.087852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.087970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.088004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.088268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.088300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.088420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.088452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 
00:27:55.234 [2024-11-20 12:37:38.088583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.088614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.088793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.088826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.089008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.089043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.089164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.089197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.089459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.089491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 
00:27:55.234 [2024-11-20 12:37:38.089657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.089691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.089874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.089905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.090051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.090086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.090195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.090227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.090401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.090434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 
00:27:55.234 [2024-11-20 12:37:38.090628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.090661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.090840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.090872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.091047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.091083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.091368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.091401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.091638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.091669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 
00:27:55.234 [2024-11-20 12:37:38.091771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.091810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.091970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.092004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.092186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.092218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.092350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.092382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 00:27:55.234 [2024-11-20 12:37:38.092564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.092596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 
00:27:55.234 [2024-11-20 12:37:38.092835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.234 [2024-11-20 12:37:38.092868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.234 qpair failed and we were unable to recover it. 
[The three-message sequence above — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats verbatim roughly 115 times with advancing timestamps from 12:37:38.092 through 12:37:38.117; the duplicate entries are omitted here.]
00:27:55.238 [2024-11-20 12:37:38.118199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.118242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.118436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.118468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.118649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.118682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.118880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.118913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.119092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.119125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 
00:27:55.238 [2024-11-20 12:37:38.119295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.119327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.119443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.119475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.119648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.119679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.119800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.119832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.120016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.120050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 
00:27:55.238 [2024-11-20 12:37:38.120233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.120266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.120445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.120476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.120600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.120633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.120897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.120929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.121051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.121083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 
00:27:55.238 [2024-11-20 12:37:38.121206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.121238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.121420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.121452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.121581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.121613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.121812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.121844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.121969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.122003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 
00:27:55.238 [2024-11-20 12:37:38.122175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.122209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.122419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.122451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.122699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.122731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.122991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.123025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.123266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.123299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 
00:27:55.238 [2024-11-20 12:37:38.123486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.123517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.123703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.123736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.123867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.123901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.124097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.124130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 00:27:55.238 [2024-11-20 12:37:38.124298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.238 [2024-11-20 12:37:38.124331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.238 qpair failed and we were unable to recover it. 
00:27:55.239 [2024-11-20 12:37:38.124444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.124475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.124723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.124755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.125034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.125068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.125310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.125342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.125470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.125502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 
00:27:55.239 [2024-11-20 12:37:38.125683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.125714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.125894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.125925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.126156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.126189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.126370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.126401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.126609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.126640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 
00:27:55.239 [2024-11-20 12:37:38.126809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.126847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.127024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.127058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.127245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.127276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.127456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.127488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.127683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.127716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 
00:27:55.239 [2024-11-20 12:37:38.127894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.127926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.128056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.128090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.128208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.128241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.128455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.128488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.128688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.128721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 
00:27:55.239 [2024-11-20 12:37:38.128895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.128928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.129047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.129079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.129266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.129298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.129486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.129518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.129649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.129681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 
00:27:55.239 [2024-11-20 12:37:38.129855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.129887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.130092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.130126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.130246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.130278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.130466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.130498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.130686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.130718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 
00:27:55.239 [2024-11-20 12:37:38.130888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.130919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.131176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.131209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.131454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.131485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.131702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.131735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.131860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.131892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 
00:27:55.239 [2024-11-20 12:37:38.132166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.132199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.132391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.239 [2024-11-20 12:37:38.132423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.239 qpair failed and we were unable to recover it. 00:27:55.239 [2024-11-20 12:37:38.132612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.240 [2024-11-20 12:37:38.132645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.240 qpair failed and we were unable to recover it. 00:27:55.240 [2024-11-20 12:37:38.132826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.240 [2024-11-20 12:37:38.132858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.240 qpair failed and we were unable to recover it. 00:27:55.240 [2024-11-20 12:37:38.132993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.240 [2024-11-20 12:37:38.133027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.240 qpair failed and we were unable to recover it. 
00:27:55.240 [2024-11-20 12:37:38.133269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.240 [2024-11-20 12:37:38.133302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.240 qpair failed and we were unable to recover it. 00:27:55.240 [2024-11-20 12:37:38.133430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.240 [2024-11-20 12:37:38.133462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.240 qpair failed and we were unable to recover it. 00:27:55.240 [2024-11-20 12:37:38.133752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.240 [2024-11-20 12:37:38.133784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.240 qpair failed and we were unable to recover it. 00:27:55.240 [2024-11-20 12:37:38.133992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.240 [2024-11-20 12:37:38.134025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.240 qpair failed and we were unable to recover it. 00:27:55.240 [2024-11-20 12:37:38.134328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.240 [2024-11-20 12:37:38.134360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.240 qpair failed and we were unable to recover it. 
00:27:55.240 [2024-11-20 12:37:38.134545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.240 [2024-11-20 12:37:38.134578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.240 qpair failed and we were unable to recover it. 00:27:55.240 [2024-11-20 12:37:38.134781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.240 [2024-11-20 12:37:38.134814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.240 qpair failed and we were unable to recover it. 00:27:55.240 [2024-11-20 12:37:38.135026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.240 [2024-11-20 12:37:38.135060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.240 qpair failed and we were unable to recover it. 00:27:55.240 [2024-11-20 12:37:38.135194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.240 [2024-11-20 12:37:38.135227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.240 qpair failed and we were unable to recover it. 00:27:55.240 [2024-11-20 12:37:38.135466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.240 [2024-11-20 12:37:38.135498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.240 qpair failed and we were unable to recover it. 
00:27:55.240 [... the same connect() failed / sock connection error / qpair failed sequence for tqpair=0x7ff894000b90 repeats through 12:37:38.140842 ...]
00:27:55.241 [2024-11-20 12:37:38.141401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.241 [2024-11-20 12:37:38.141472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.241 qpair failed and we were unable to recover it.
00:27:55.243 [... the same connect() failed / sock connection error / qpair failed sequence for tqpair=0x7ff898000b90 repeats through 12:37:38.160073 ...]
00:27:55.243 [2024-11-20 12:37:38.160261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.243 [2024-11-20 12:37:38.160292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.243 qpair failed and we were unable to recover it. 00:27:55.243 [2024-11-20 12:37:38.160513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.243 [2024-11-20 12:37:38.160545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.243 qpair failed and we were unable to recover it. 00:27:55.243 [2024-11-20 12:37:38.160646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.243 [2024-11-20 12:37:38.160679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.243 qpair failed and we were unable to recover it. 00:27:55.243 [2024-11-20 12:37:38.160851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.243 [2024-11-20 12:37:38.160883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.243 qpair failed and we were unable to recover it. 00:27:55.243 [2024-11-20 12:37:38.161107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.243 [2024-11-20 12:37:38.161140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.243 qpair failed and we were unable to recover it. 
00:27:55.243 [2024-11-20 12:37:38.161256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.243 [2024-11-20 12:37:38.161288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.243 qpair failed and we were unable to recover it. 00:27:55.243 [2024-11-20 12:37:38.161421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.243 [2024-11-20 12:37:38.161452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.243 qpair failed and we were unable to recover it. 00:27:55.243 [2024-11-20 12:37:38.161567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.243 [2024-11-20 12:37:38.161600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.243 qpair failed and we were unable to recover it. 00:27:55.243 [2024-11-20 12:37:38.161736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.243 [2024-11-20 12:37:38.161768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.243 qpair failed and we were unable to recover it. 00:27:55.243 [2024-11-20 12:37:38.161962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.243 [2024-11-20 12:37:38.161995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.243 qpair failed and we were unable to recover it. 
00:27:55.243 [2024-11-20 12:37:38.162183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.243 [2024-11-20 12:37:38.162220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.243 qpair failed and we were unable to recover it. 00:27:55.243 [2024-11-20 12:37:38.162400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.243 [2024-11-20 12:37:38.162432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.243 qpair failed and we were unable to recover it. 00:27:55.243 [2024-11-20 12:37:38.163828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.243 [2024-11-20 12:37:38.163882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.243 qpair failed and we were unable to recover it. 00:27:55.243 [2024-11-20 12:37:38.164176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.243 [2024-11-20 12:37:38.164212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.243 qpair failed and we were unable to recover it. 00:27:55.243 [2024-11-20 12:37:38.164476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.243 [2024-11-20 12:37:38.164509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.243 qpair failed and we were unable to recover it. 
00:27:55.243 [2024-11-20 12:37:38.164694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.243 [2024-11-20 12:37:38.164728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.243 qpair failed and we were unable to recover it. 00:27:55.243 [2024-11-20 12:37:38.164847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.243 [2024-11-20 12:37:38.164878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.243 qpair failed and we were unable to recover it. 00:27:55.243 [2024-11-20 12:37:38.165007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.243 [2024-11-20 12:37:38.165041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.243 qpair failed and we were unable to recover it. 00:27:55.243 [2024-11-20 12:37:38.165165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.243 [2024-11-20 12:37:38.165197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.243 qpair failed and we were unable to recover it. 00:27:55.243 [2024-11-20 12:37:38.165323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.165355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 
00:27:55.244 [2024-11-20 12:37:38.165535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.165567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.165693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.165725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.165915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.165959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.166133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.166166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.166367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.166400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 
00:27:55.244 [2024-11-20 12:37:38.166584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.166616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.166738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.166770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.166988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.167020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.167142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.167174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.167353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.167386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 
00:27:55.244 [2024-11-20 12:37:38.167572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.167604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.167856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.167889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.168079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.168112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.168241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.168273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.168395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.168427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 
00:27:55.244 [2024-11-20 12:37:38.168672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.168704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.168944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.168988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.169178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.169216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.169335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.169367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.169500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.169532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 
00:27:55.244 [2024-11-20 12:37:38.169715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.169747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.169988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.170022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.170213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.170246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.170453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.170485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.170597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.170629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 
00:27:55.244 [2024-11-20 12:37:38.170739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.170772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.170915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.170968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.171146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.171180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.171284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.171317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.171520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.171551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 
00:27:55.244 [2024-11-20 12:37:38.171731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.171765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.171985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.172018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.172152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.172185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.172385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.172417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.172557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.172588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 
00:27:55.244 [2024-11-20 12:37:38.172825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.172857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.173100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.244 [2024-11-20 12:37:38.173134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.244 qpair failed and we were unable to recover it. 00:27:55.244 [2024-11-20 12:37:38.173312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.173344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 00:27:55.245 [2024-11-20 12:37:38.173462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.173495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 00:27:55.245 [2024-11-20 12:37:38.173673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.173705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 
00:27:55.245 [2024-11-20 12:37:38.173846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.173878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 00:27:55.245 [2024-11-20 12:37:38.174000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.174033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 00:27:55.245 [2024-11-20 12:37:38.174206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.174238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 00:27:55.245 [2024-11-20 12:37:38.174355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.174387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 00:27:55.245 [2024-11-20 12:37:38.174575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.174607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 
00:27:55.245 [2024-11-20 12:37:38.174780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.174813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 00:27:55.245 [2024-11-20 12:37:38.175048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.175081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 00:27:55.245 [2024-11-20 12:37:38.175263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.175296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 00:27:55.245 [2024-11-20 12:37:38.175419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.175451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 00:27:55.245 [2024-11-20 12:37:38.175622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.175653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 
00:27:55.245 [2024-11-20 12:37:38.175834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.175867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 00:27:55.245 [2024-11-20 12:37:38.176037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.176070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 00:27:55.245 [2024-11-20 12:37:38.176276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.176309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 00:27:55.245 [2024-11-20 12:37:38.176516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.176548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 00:27:55.245 [2024-11-20 12:37:38.176728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.176761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 
00:27:55.245 [2024-11-20 12:37:38.176944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.176985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 00:27:55.245 [2024-11-20 12:37:38.177190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.177222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 00:27:55.245 [2024-11-20 12:37:38.177344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.177394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 00:27:55.245 [2024-11-20 12:37:38.177501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.177534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 00:27:55.245 [2024-11-20 12:37:38.177662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.245 [2024-11-20 12:37:38.177694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.245 qpair failed and we were unable to recover it. 
00:27:55.248 [2024-11-20 12:37:38.201983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.248 [2024-11-20 12:37:38.202015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.248 qpair failed and we were unable to recover it. 00:27:55.248 [2024-11-20 12:37:38.202143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.248 [2024-11-20 12:37:38.202174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.248 qpair failed and we were unable to recover it. 00:27:55.248 [2024-11-20 12:37:38.202354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.248 [2024-11-20 12:37:38.202385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.248 qpair failed and we were unable to recover it. 00:27:55.248 [2024-11-20 12:37:38.202508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.248 [2024-11-20 12:37:38.202539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.248 qpair failed and we were unable to recover it. 00:27:55.248 [2024-11-20 12:37:38.202841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.248 [2024-11-20 12:37:38.202873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.248 qpair failed and we were unable to recover it. 
00:27:55.248 [2024-11-20 12:37:38.202986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.248 [2024-11-20 12:37:38.203018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.248 qpair failed and we were unable to recover it. 00:27:55.248 [2024-11-20 12:37:38.203216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.248 [2024-11-20 12:37:38.203247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.248 qpair failed and we were unable to recover it. 00:27:55.248 [2024-11-20 12:37:38.203358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.248 [2024-11-20 12:37:38.203395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.248 qpair failed and we were unable to recover it. 00:27:55.248 [2024-11-20 12:37:38.203574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.248 [2024-11-20 12:37:38.203605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.248 qpair failed and we were unable to recover it. 00:27:55.248 [2024-11-20 12:37:38.203738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.248 [2024-11-20 12:37:38.203769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.248 qpair failed and we were unable to recover it. 
00:27:55.248 [2024-11-20 12:37:38.203946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.248 [2024-11-20 12:37:38.203988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.248 qpair failed and we were unable to recover it. 00:27:55.248 [2024-11-20 12:37:38.204101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.248 [2024-11-20 12:37:38.204132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.248 qpair failed and we were unable to recover it. 00:27:55.248 [2024-11-20 12:37:38.204235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.248 [2024-11-20 12:37:38.204266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.248 qpair failed and we were unable to recover it. 00:27:55.248 [2024-11-20 12:37:38.204459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.204490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.204751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.204782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 
00:27:55.249 [2024-11-20 12:37:38.204899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.204930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.205114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.205144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.205260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.205291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.205428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.205459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.205580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.205611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 
00:27:55.249 [2024-11-20 12:37:38.205730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.205761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.205888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.205919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.206065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.206096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.206204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.206235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.206414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.206445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 
00:27:55.249 [2024-11-20 12:37:38.206581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.206612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.206795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.206825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.206923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.206989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.207102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.207132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.207331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.207362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 
00:27:55.249 [2024-11-20 12:37:38.207481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.207511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.207686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.207718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.207838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.207870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.208112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.208144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.208259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.208291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 
00:27:55.249 [2024-11-20 12:37:38.208467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.208497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.208628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.208660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.208785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.208816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.208992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.209024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.209193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.209224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 
00:27:55.249 [2024-11-20 12:37:38.209405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.209436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.209568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.209600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.209850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.209881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.210075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.210107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.210224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.210255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 
00:27:55.249 [2024-11-20 12:37:38.210514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.210545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.210733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.210764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.210934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.210979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.211251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.211281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.211416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.211446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 
00:27:55.249 [2024-11-20 12:37:38.211660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.249 [2024-11-20 12:37:38.211691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.249 qpair failed and we were unable to recover it. 00:27:55.249 [2024-11-20 12:37:38.211927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.211969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.212102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.212134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.212317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.212348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.212468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.212498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 
00:27:55.250 [2024-11-20 12:37:38.212677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.212709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.212968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.213000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.213185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.213217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.213326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.213364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.213643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.213673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 
00:27:55.250 [2024-11-20 12:37:38.213858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.213889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.214032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.214066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.214172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.214203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.214335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.214366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.214597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.214628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 
00:27:55.250 [2024-11-20 12:37:38.214812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.214842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.214985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.215018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.215146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.215177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.215293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.215323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.215491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.215522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 
00:27:55.250 [2024-11-20 12:37:38.215650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.215682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.215813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.215844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.216057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.216090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.216304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.216336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.216525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.216557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 
00:27:55.250 [2024-11-20 12:37:38.216675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.216705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.216883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.216914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.217112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.217145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.217282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.217312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.217570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.217603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 
00:27:55.250 [2024-11-20 12:37:38.217783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.217813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.218000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.218033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.218245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.218277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.218444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.218476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.218676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.218706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 
00:27:55.250 [2024-11-20 12:37:38.218989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.219023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.219209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.219241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.219484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.219522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.219657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.250 [2024-11-20 12:37:38.219690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.250 qpair failed and we were unable to recover it. 00:27:55.250 [2024-11-20 12:37:38.219821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.219853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 
00:27:55.251 [2024-11-20 12:37:38.220047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.220079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.220337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.220368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.220606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.220637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.220754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.220785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.220976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.221008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 
00:27:55.251 [2024-11-20 12:37:38.221224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.221254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.221381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.221412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.221601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.221633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.221798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.221830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.221956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.221989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 
00:27:55.251 [2024-11-20 12:37:38.222192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.222225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.222477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.222508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.222695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.222727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.222919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.222977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.223215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.223247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 
00:27:55.251 [2024-11-20 12:37:38.223413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.223443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.223558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.223590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.223765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.223796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.223902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.223933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.224134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.224167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 
00:27:55.251 [2024-11-20 12:37:38.224401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.224432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.224602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.224633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.224839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.224871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.225006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.225038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.225289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.225361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 
00:27:55.251 [2024-11-20 12:37:38.225584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.225621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.225803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.225835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.226034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.226070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.226322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.226355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.226571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.226602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 
00:27:55.251 [2024-11-20 12:37:38.226725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.226756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.226940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.226985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.227102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.251 [2024-11-20 12:37:38.227134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.251 qpair failed and we were unable to recover it. 00:27:55.251 [2024-11-20 12:37:38.227318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.227349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.227590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.227622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 
00:27:55.252 [2024-11-20 12:37:38.227721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.227751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.227982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.228014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.228207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.228239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.228365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.228397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.228593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.228623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 
00:27:55.252 [2024-11-20 12:37:38.228863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.228895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.229080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.229112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.229217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.229248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.229371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.229402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.229583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.229614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 
00:27:55.252 [2024-11-20 12:37:38.229749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.229780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.229888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.229923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.230136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.230166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.230336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.230367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.230551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.230583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 
00:27:55.252 [2024-11-20 12:37:38.230718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.230749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.230877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.230913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.231146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.231211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.231347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.231383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.231586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.231618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 
00:27:55.252 [2024-11-20 12:37:38.231756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.231787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.232074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.232107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.232238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.232269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.232447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.232478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.232657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.232688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 
00:27:55.252 [2024-11-20 12:37:38.232819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.232851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.233041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.233073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.233176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.233206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.233332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.233364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.233496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.233536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 
00:27:55.252 [2024-11-20 12:37:38.233793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.233824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.233966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.234000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.234125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.234155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.234409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.234439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.234547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.234578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 
00:27:55.252 [2024-11-20 12:37:38.234703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.252 [2024-11-20 12:37:38.234734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.252 qpair failed and we were unable to recover it. 00:27:55.252 [2024-11-20 12:37:38.234973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.253 [2024-11-20 12:37:38.235006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.253 qpair failed and we were unable to recover it. 00:27:55.253 [2024-11-20 12:37:38.235186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.253 [2024-11-20 12:37:38.235217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.253 qpair failed and we were unable to recover it. 00:27:55.253 [2024-11-20 12:37:38.235322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.253 [2024-11-20 12:37:38.235353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.253 qpair failed and we were unable to recover it. 00:27:55.253 [2024-11-20 12:37:38.235470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.253 [2024-11-20 12:37:38.235500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.253 qpair failed and we were unable to recover it. 
00:27:55.253 [2024-11-20 12:37:38.235670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.253 [2024-11-20 12:37:38.235701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.253 qpair failed and we were unable to recover it. 00:27:55.253 [2024-11-20 12:37:38.235830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.253 [2024-11-20 12:37:38.235861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.253 qpair failed and we were unable to recover it. 00:27:55.253 [2024-11-20 12:37:38.236096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.253 [2024-11-20 12:37:38.236129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.253 qpair failed and we were unable to recover it. 00:27:55.253 [2024-11-20 12:37:38.236254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.253 [2024-11-20 12:37:38.236285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.253 qpair failed and we were unable to recover it. 00:27:55.253 [2024-11-20 12:37:38.236526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.253 [2024-11-20 12:37:38.236557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.253 qpair failed and we were unable to recover it. 
00:27:55.253 [2024-11-20 12:37:38.236680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.253 [2024-11-20 12:37:38.236710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.253 qpair failed and we were unable to recover it. 00:27:55.253 [2024-11-20 12:37:38.236877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.253 [2024-11-20 12:37:38.236907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.253 qpair failed and we were unable to recover it. 00:27:55.253 [2024-11-20 12:37:38.237041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.253 [2024-11-20 12:37:38.237073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.253 qpair failed and we were unable to recover it. 00:27:55.253 [2024-11-20 12:37:38.237252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.253 [2024-11-20 12:37:38.237283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.253 qpair failed and we were unable to recover it. 00:27:55.253 [2024-11-20 12:37:38.237477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.253 [2024-11-20 12:37:38.237507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.253 qpair failed and we were unable to recover it. 
00:27:55.253 [2024-11-20 12:37:38.237639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.253 [2024-11-20 12:37:38.237670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.253 qpair failed and we were unable to recover it. 00:27:55.253 [2024-11-20 12:37:38.237905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.253 [2024-11-20 12:37:38.237936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.253 qpair failed and we were unable to recover it. 00:27:55.253 [2024-11-20 12:37:38.238120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.253 [2024-11-20 12:37:38.238152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.253 qpair failed and we were unable to recover it. 00:27:55.253 [2024-11-20 12:37:38.238327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.253 [2024-11-20 12:37:38.238358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.253 qpair failed and we were unable to recover it. 00:27:55.253 [2024-11-20 12:37:38.238473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.253 [2024-11-20 12:37:38.238504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.253 qpair failed and we were unable to recover it. 
00:27:55.253 [2024-11-20 12:37:38.238639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-20 12:37:38.238669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.253 qpair failed and we were unable to recover it.
00:27:55.253 [2024-11-20 12:37:38.238834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-20 12:37:38.238904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.253 qpair failed and we were unable to recover it.
00:27:55.253 [2024-11-20 12:37:38.239053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-20 12:37:38.239089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.253 qpair failed and we were unable to recover it.
00:27:55.253 [2024-11-20 12:37:38.239275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-20 12:37:38.239307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.253 qpair failed and we were unable to recover it.
00:27:55.253 [2024-11-20 12:37:38.239500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-20 12:37:38.239532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.253 qpair failed and we were unable to recover it.
00:27:55.253 [2024-11-20 12:37:38.239637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-20 12:37:38.239668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.253 qpair failed and we were unable to recover it.
00:27:55.253 [2024-11-20 12:37:38.239920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-20 12:37:38.239964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.253 qpair failed and we were unable to recover it.
00:27:55.253 [2024-11-20 12:37:38.240165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-20 12:37:38.240196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.253 qpair failed and we were unable to recover it.
00:27:55.253 [2024-11-20 12:37:38.240323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-20 12:37:38.240354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.253 qpair failed and we were unable to recover it.
00:27:55.253 [2024-11-20 12:37:38.240487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-20 12:37:38.240517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.253 qpair failed and we were unable to recover it.
00:27:55.253 [2024-11-20 12:37:38.240624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-20 12:37:38.240655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.253 qpair failed and we were unable to recover it.
00:27:55.253 [2024-11-20 12:37:38.240862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-20 12:37:38.240894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.253 qpair failed and we were unable to recover it.
00:27:55.253 [2024-11-20 12:37:38.241119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-20 12:37:38.241151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.253 qpair failed and we were unable to recover it.
00:27:55.253 [2024-11-20 12:37:38.241325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-20 12:37:38.241357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.253 qpair failed and we were unable to recover it.
00:27:55.253 [2024-11-20 12:37:38.241467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-20 12:37:38.241499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.253 qpair failed and we were unable to recover it.
00:27:55.253 [2024-11-20 12:37:38.241743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-20 12:37:38.241774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.253 qpair failed and we were unable to recover it.
00:27:55.253 [2024-11-20 12:37:38.241971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-20 12:37:38.242004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.253 qpair failed and we were unable to recover it.
00:27:55.253 [2024-11-20 12:37:38.242183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-20 12:37:38.242213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.253 qpair failed and we were unable to recover it.
00:27:55.253 [2024-11-20 12:37:38.242349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.253 [2024-11-20 12:37:38.242381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.253 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.242582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.242614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.242733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.242764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.242877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.242909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.243100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.243133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.243333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.243363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.243484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.243515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.243696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.243728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.243984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.244017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.244208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.244239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.244447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.244478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.244601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.244632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.244801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.244832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.245026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.245058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.245175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.245207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.245381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.245412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.245601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.245632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.245812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.245842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.245973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.246004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.246269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.246301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.246472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.246502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.246697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.246727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.246840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.246870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.247046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.247085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.247253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.247284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.247397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.247429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.247562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.247593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.247875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.247905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.248031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.248062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.248245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.248276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.248442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.248472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.248595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.248626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.248803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.248835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.249111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.249144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.249260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.249290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.249484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.249515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.249718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.249748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.249960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.249993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.254 [2024-11-20 12:37:38.250113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.254 [2024-11-20 12:37:38.250143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.254 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.250325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.250356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.250529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.250561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.250794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.250825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.250978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.251010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.251194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.251226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.251342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.251372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.251633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.251664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.251780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.251811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.251916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.251946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.252175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.252207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.252417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.252447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.252630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.252661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.252783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.252814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.253029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.253061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.253177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.253207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.253331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.253361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.253572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.253603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.253783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.253813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.254095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.254127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.254308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.254340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.254522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.254553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.254720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.254750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.254964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.254996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.255169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.255200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.255382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.255418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.255528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.255559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.255858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.255889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.256067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.256098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.256282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.256314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.256519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.256548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.256717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.256749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.256930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.256971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.257107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.257138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.257246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.257278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.257398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.257428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.257593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.257623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.257739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.257770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.255 [2024-11-20 12:37:38.257942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.255 [2024-11-20 12:37:38.257986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.255 qpair failed and we were unable to recover it.
00:27:55.256 [2024-11-20 12:37:38.258236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.258268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.258467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.258497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.258637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.258668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.258857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.258889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.259006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.259039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 
00:27:55.256 [2024-11-20 12:37:38.259210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.259240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.259347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.259378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.259553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.259585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.259708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.259738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.259925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.259966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 
00:27:55.256 [2024-11-20 12:37:38.260080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.260113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.260284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.260314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.260435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.260466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.260600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.260632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.260733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.260765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 
00:27:55.256 [2024-11-20 12:37:38.260960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.260993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.261097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.261139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.261335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.261365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.261556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.261586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.261697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.261727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 
00:27:55.256 [2024-11-20 12:37:38.261865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.261897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.262085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.262117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.263578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.263631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.263854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.263886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.265274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.265325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 
00:27:55.256 [2024-11-20 12:37:38.265583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.265616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.265737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.265776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.265997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.266031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.266232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.266263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.266386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.266417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 
00:27:55.256 [2024-11-20 12:37:38.266591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.266622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.266884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.266915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.256 [2024-11-20 12:37:38.267094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.256 [2024-11-20 12:37:38.267126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.256 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.267331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.267361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.267533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.267564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 
00:27:55.257 [2024-11-20 12:37:38.267697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.267728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.267912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.267943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.268194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.268224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.268411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.268442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.268559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.268589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 
00:27:55.257 [2024-11-20 12:37:38.268822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.268854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.268990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.269023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.269217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.269248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.269455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.269486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.269588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.269618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 
00:27:55.257 [2024-11-20 12:37:38.269803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.269837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.269966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.269997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.270194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.270226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.270402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.270433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.270625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.270655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 
00:27:55.257 [2024-11-20 12:37:38.270777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.270812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.270940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.271010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.271193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.271225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.271353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.271384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.271503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.271534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 
00:27:55.257 [2024-11-20 12:37:38.271650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.271681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.271878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.271908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.272041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.272073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.272193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.272225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.272438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.272469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 
00:27:55.257 [2024-11-20 12:37:38.272733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.272764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.272890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.272921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.273049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.273080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.273252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.273283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.273466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.273497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 
00:27:55.257 [2024-11-20 12:37:38.273677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.273708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.273893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.273930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.274147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.274179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.274355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.274384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 00:27:55.257 [2024-11-20 12:37:38.274504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.257 [2024-11-20 12:37:38.274535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.257 qpair failed and we were unable to recover it. 
00:27:55.257 [2024-11-20 12:37:38.274724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.258 [2024-11-20 12:37:38.274756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.258 qpair failed and we were unable to recover it. 00:27:55.258 [2024-11-20 12:37:38.274872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.258 [2024-11-20 12:37:38.274904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.258 qpair failed and we were unable to recover it. 00:27:55.258 [2024-11-20 12:37:38.275068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.258 [2024-11-20 12:37:38.275100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.258 qpair failed and we were unable to recover it. 00:27:55.258 [2024-11-20 12:37:38.275203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.258 [2024-11-20 12:37:38.275234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.258 qpair failed and we were unable to recover it. 00:27:55.258 [2024-11-20 12:37:38.275422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.258 [2024-11-20 12:37:38.275452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.258 qpair failed and we were unable to recover it. 
00:27:55.258 [2024-11-20 12:37:38.275568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.258 [2024-11-20 12:37:38.275600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.258 qpair failed and we were unable to recover it. 00:27:55.258 [2024-11-20 12:37:38.275780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.258 [2024-11-20 12:37:38.275811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.258 qpair failed and we were unable to recover it. 00:27:55.258 [2024-11-20 12:37:38.275981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.258 [2024-11-20 12:37:38.276014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.258 qpair failed and we were unable to recover it. 00:27:55.258 [2024-11-20 12:37:38.276144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.258 [2024-11-20 12:37:38.276175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.258 qpair failed and we were unable to recover it. 00:27:55.258 [2024-11-20 12:37:38.276355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.258 [2024-11-20 12:37:38.276385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.258 qpair failed and we were unable to recover it. 
00:27:55.258 [2024-11-20 12:37:38.276510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.258 [2024-11-20 12:37:38.276541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.258 qpair failed and we were unable to recover it. 00:27:55.258 [2024-11-20 12:37:38.276659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.258 [2024-11-20 12:37:38.276689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.258 qpair failed and we were unable to recover it. 00:27:55.258 [2024-11-20 12:37:38.276854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.258 [2024-11-20 12:37:38.276886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.258 qpair failed and we were unable to recover it. 00:27:55.258 [2024-11-20 12:37:38.276994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.258 [2024-11-20 12:37:38.277027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.258 qpair failed and we were unable to recover it. 00:27:55.258 [2024-11-20 12:37:38.277153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.258 [2024-11-20 12:37:38.277184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.258 qpair failed and we were unable to recover it. 
00:27:55.258 [2024-11-20 12:37:38.277306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.258 [2024-11-20 12:37:38.277337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.258 qpair failed and we were unable to recover it.
[... the three messages above repeated ~114 more times (wall-clock 12:37:38.277 through 12:37:38.300, log offsets 00:27:55.258 through 00:27:55.575); every repetition reports the same connect() failure with errno = 111 (ECONNREFUSED) for tqpair=0x7ff898000b90, addr=10.0.0.2, port=4420 ...]
00:27:55.575 [2024-11-20 12:37:38.299812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.575 [2024-11-20 12:37:38.299844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.575 qpair failed and we were unable to recover it. 00:27:55.575 [2024-11-20 12:37:38.300021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.575 [2024-11-20 12:37:38.300054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.575 qpair failed and we were unable to recover it. 00:27:55.575 [2024-11-20 12:37:38.300227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.575 [2024-11-20 12:37:38.300259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.575 qpair failed and we were unable to recover it. 00:27:55.575 [2024-11-20 12:37:38.300449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.575 [2024-11-20 12:37:38.300480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.575 qpair failed and we were unable to recover it. 00:27:55.575 [2024-11-20 12:37:38.300589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.575 [2024-11-20 12:37:38.300619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.575 qpair failed and we were unable to recover it. 
00:27:55.576 [2024-11-20 12:37:38.300757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.300788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.300896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.300928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.301058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.301091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.301344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.301374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.301488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.301520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 
00:27:55.576 [2024-11-20 12:37:38.301634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.301666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.301846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.301878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.302075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.302109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.302299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.302384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.302613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.302664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 
00:27:55.576 [2024-11-20 12:37:38.302906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.302973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.303180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.303228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.303429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.303480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.303648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.303697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.303842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.303877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 
00:27:55.576 [2024-11-20 12:37:38.304067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.304100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.304270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.304301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.304444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.304476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.304612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.304642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.304825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.304858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 
00:27:55.576 [2024-11-20 12:37:38.304992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.305025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.305218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.305256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.305374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.305405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.305505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.305538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.305668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.305700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 
00:27:55.576 [2024-11-20 12:37:38.305830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.305861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.306067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.306103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.306228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.306259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.306445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.306476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 00:27:55.576 [2024-11-20 12:37:38.306606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.576 [2024-11-20 12:37:38.306638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.576 qpair failed and we were unable to recover it. 
00:27:55.576 [2024-11-20 12:37:38.306832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.306862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.306974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.307006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.307125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.307158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.307347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.307379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.307498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.307531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 
00:27:55.577 [2024-11-20 12:37:38.307724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.307754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.307936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.307981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.308165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.308195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.308310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.308342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.308465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.308496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 
00:27:55.577 [2024-11-20 12:37:38.308594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.308625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.308733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.308766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.308898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.308934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.309132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.309164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.309278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.309309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 
00:27:55.577 [2024-11-20 12:37:38.309498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.309529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.309652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.309685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.309855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.309886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.310014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.310047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.310178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.310210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 
00:27:55.577 [2024-11-20 12:37:38.310415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.310446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.310627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.310658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.310788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.310819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.310927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.310990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.311111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.311142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 
00:27:55.577 [2024-11-20 12:37:38.311246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.311276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.311386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.311417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.311661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.311691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.311805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.311836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.577 [2024-11-20 12:37:38.311970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.312004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 
00:27:55.577 [2024-11-20 12:37:38.312126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.577 [2024-11-20 12:37:38.312157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.577 qpair failed and we were unable to recover it. 00:27:55.578 [2024-11-20 12:37:38.312387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.578 [2024-11-20 12:37:38.312419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.578 qpair failed and we were unable to recover it. 00:27:55.578 [2024-11-20 12:37:38.312523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.578 [2024-11-20 12:37:38.312554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.578 qpair failed and we were unable to recover it. 00:27:55.578 [2024-11-20 12:37:38.312746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.578 [2024-11-20 12:37:38.312777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.578 qpair failed and we were unable to recover it. 00:27:55.578 [2024-11-20 12:37:38.312909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.578 [2024-11-20 12:37:38.312940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.578 qpair failed and we were unable to recover it. 
00:27:55.578 [2024-11-20 12:37:38.313069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.578 [2024-11-20 12:37:38.313101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.578 qpair failed and we were unable to recover it. 00:27:55.578 [2024-11-20 12:37:38.313265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.578 [2024-11-20 12:37:38.313296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.578 qpair failed and we were unable to recover it. 00:27:55.578 [2024-11-20 12:37:38.313412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.578 [2024-11-20 12:37:38.313443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.578 qpair failed and we were unable to recover it. 00:27:55.578 [2024-11-20 12:37:38.313617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.578 [2024-11-20 12:37:38.313648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.578 qpair failed and we were unable to recover it. 00:27:55.578 [2024-11-20 12:37:38.313759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.578 [2024-11-20 12:37:38.313790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.578 qpair failed and we were unable to recover it. 
00:27:55.578 [2024-11-20 12:37:38.313969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.578 [2024-11-20 12:37:38.314002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.578 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." records for tqpair=0x7ff898000b90 repeat through 2024-11-20 12:37:38.331545; only the timestamps differ]
00:27:55.581 [2024-11-20 12:37:38.331654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.581 [2024-11-20 12:37:38.331694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.581 qpair failed and we were unable to recover it.
[identical records for tqpair=0x1d57ba0 repeat through 2024-11-20 12:37:38.333112]
00:27:55.581 [2024-11-20 12:37:38.333213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.581 [2024-11-20 12:37:38.333232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.581 qpair failed and we were unable to recover it. 00:27:55.581 [2024-11-20 12:37:38.333311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.581 [2024-11-20 12:37:38.333330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.581 qpair failed and we were unable to recover it. 00:27:55.581 [2024-11-20 12:37:38.333430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.581 [2024-11-20 12:37:38.333449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.581 qpair failed and we were unable to recover it. 00:27:55.581 [2024-11-20 12:37:38.333534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.581 [2024-11-20 12:37:38.333554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.581 qpair failed and we were unable to recover it. 00:27:55.581 [2024-11-20 12:37:38.333707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.581 [2024-11-20 12:37:38.333727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.581 qpair failed and we were unable to recover it. 
00:27:55.581 [2024-11-20 12:37:38.333802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.581 [2024-11-20 12:37:38.333821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.581 qpair failed and we were unable to recover it. 00:27:55.581 [2024-11-20 12:37:38.333977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.581 [2024-11-20 12:37:38.333999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.581 qpair failed and we were unable to recover it. 00:27:55.581 [2024-11-20 12:37:38.334101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.581 [2024-11-20 12:37:38.334122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.581 qpair failed and we were unable to recover it. 00:27:55.581 [2024-11-20 12:37:38.334346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.581 [2024-11-20 12:37:38.334368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.581 qpair failed and we were unable to recover it. 00:27:55.581 [2024-11-20 12:37:38.334452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.581 [2024-11-20 12:37:38.334473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.581 qpair failed and we were unable to recover it. 
00:27:55.581 [2024-11-20 12:37:38.334574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.581 [2024-11-20 12:37:38.334599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.581 qpair failed and we were unable to recover it. 00:27:55.581 [2024-11-20 12:37:38.334683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.581 [2024-11-20 12:37:38.334702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.581 qpair failed and we were unable to recover it. 00:27:55.581 [2024-11-20 12:37:38.334792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.581 [2024-11-20 12:37:38.334813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.581 qpair failed and we were unable to recover it. 00:27:55.581 [2024-11-20 12:37:38.334898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.334919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.335013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.335034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 
00:27:55.582 [2024-11-20 12:37:38.335182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.335203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.335295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.335315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.335401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.335421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.335504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.335523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.335606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.335627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 
00:27:55.582 [2024-11-20 12:37:38.335774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.335795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.335882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.335902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.335989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.336012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.336092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.336112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.336265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.336293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 
00:27:55.582 [2024-11-20 12:37:38.336380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.336407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.336511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.336539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.336698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.336724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.336819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.336845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.336941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.336979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 
00:27:55.582 [2024-11-20 12:37:38.337075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.337101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.337291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.337318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.337409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.337436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.337546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.337573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.337664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.337691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 
00:27:55.582 [2024-11-20 12:37:38.337853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.337881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.338088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.338115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.338280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.338311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.338413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.338439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.338622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.338650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 
00:27:55.582 [2024-11-20 12:37:38.338750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.338777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.338958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.338986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.339103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.339130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.339252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.339279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.339460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.339487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 
00:27:55.582 [2024-11-20 12:37:38.339585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.339611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.582 qpair failed and we were unable to recover it. 00:27:55.582 [2024-11-20 12:37:38.339777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.582 [2024-11-20 12:37:38.339805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.339923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.339955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.340055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.340083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.340185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.340211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 
00:27:55.583 [2024-11-20 12:37:38.340305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.340330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.340455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.340481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.340588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.340618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.340752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.340778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.340888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.340913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 
00:27:55.583 [2024-11-20 12:37:38.341020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.341046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.341143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.341169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.341344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.341371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.341563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.341590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.341699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.341725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 
00:27:55.583 [2024-11-20 12:37:38.341823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.341850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.342013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.342042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.342149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.342175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.342284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.342320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.342436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.342465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 
00:27:55.583 [2024-11-20 12:37:38.342571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.342598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.342714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.342740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.342849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.342877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.342975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.343002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.343102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.343129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 
00:27:55.583 [2024-11-20 12:37:38.343223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.343250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.343348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.343375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.343476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.343502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.343594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.343620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.343742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.343768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 
00:27:55.583 [2024-11-20 12:37:38.343867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.343894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.343993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.344020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.344117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.344144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.344298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.344368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 00:27:55.583 [2024-11-20 12:37:38.344503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.583 [2024-11-20 12:37:38.344539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.583 qpair failed and we were unable to recover it. 
00:27:55.587 [2024-11-20 12:37:38.366152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.366184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.366359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.366392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.366582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.366613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.366854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.366885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.367011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.367045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 
00:27:55.587 [2024-11-20 12:37:38.367178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.367210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.367408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.367441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.367557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.367589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.367825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.367857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.368034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.368066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 
00:27:55.587 [2024-11-20 12:37:38.368264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.368296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.368441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.368473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.368667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.368700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.368887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.368920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.369069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.369102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 
00:27:55.587 [2024-11-20 12:37:38.369218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.369249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.369433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.369466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.369651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.369683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.369801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.369832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.370098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.370132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 
00:27:55.587 [2024-11-20 12:37:38.370341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.370374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.370587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.370618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.370922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.370961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.371163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.371200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.587 [2024-11-20 12:37:38.371378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.371410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 
00:27:55.587 [2024-11-20 12:37:38.371598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.587 [2024-11-20 12:37:38.371629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.587 qpair failed and we were unable to recover it. 00:27:55.588 [2024-11-20 12:37:38.371891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.588 [2024-11-20 12:37:38.371922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.588 qpair failed and we were unable to recover it. 00:27:55.588 [2024-11-20 12:37:38.372065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.588 [2024-11-20 12:37:38.372098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.588 qpair failed and we were unable to recover it. 00:27:55.588 [2024-11-20 12:37:38.372287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.588 [2024-11-20 12:37:38.372319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.588 qpair failed and we were unable to recover it. 00:27:55.588 [2024-11-20 12:37:38.372532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.588 [2024-11-20 12:37:38.372564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.588 qpair failed and we were unable to recover it. 
00:27:55.588 [2024-11-20 12:37:38.372689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.588 [2024-11-20 12:37:38.372720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.588 qpair failed and we were unable to recover it. 00:27:55.588 [2024-11-20 12:37:38.372842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.588 [2024-11-20 12:37:38.372875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.588 qpair failed and we were unable to recover it. 00:27:55.588 [2024-11-20 12:37:38.373008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.588 [2024-11-20 12:37:38.373043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.588 qpair failed and we were unable to recover it. 00:27:55.588 [2024-11-20 12:37:38.373241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.588 [2024-11-20 12:37:38.373273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.588 qpair failed and we were unable to recover it. 00:27:55.588 [2024-11-20 12:37:38.373452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.588 [2024-11-20 12:37:38.373485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.588 qpair failed and we were unable to recover it. 
00:27:55.588 [2024-11-20 12:37:38.373611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.588 [2024-11-20 12:37:38.373642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.588 qpair failed and we were unable to recover it. 00:27:55.588 [2024-11-20 12:37:38.373834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.588 [2024-11-20 12:37:38.373877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.588 qpair failed and we were unable to recover it. 00:27:55.588 [2024-11-20 12:37:38.374079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.588 [2024-11-20 12:37:38.374113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.588 qpair failed and we were unable to recover it. 00:27:55.588 [2024-11-20 12:37:38.374322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.588 [2024-11-20 12:37:38.374353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.588 qpair failed and we were unable to recover it. 00:27:55.588 [2024-11-20 12:37:38.374491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.588 [2024-11-20 12:37:38.374523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.588 qpair failed and we were unable to recover it. 
00:27:55.588 [2024-11-20 12:37:38.374638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.588 [2024-11-20 12:37:38.374670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.588 qpair failed and we were unable to recover it. 00:27:55.588 [2024-11-20 12:37:38.374789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.588 [2024-11-20 12:37:38.374821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.588 qpair failed and we were unable to recover it. 00:27:55.588 [2024-11-20 12:37:38.375060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.588 [2024-11-20 12:37:38.375093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.588 qpair failed and we were unable to recover it. 00:27:55.588 [2024-11-20 12:37:38.375199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.588 [2024-11-20 12:37:38.375232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.588 qpair failed and we were unable to recover it. 00:27:55.589 [2024-11-20 12:37:38.375345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.589 [2024-11-20 12:37:38.375376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.589 qpair failed and we were unable to recover it. 
00:27:55.589 [2024-11-20 12:37:38.375586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.589 [2024-11-20 12:37:38.375618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.589 qpair failed and we were unable to recover it. 00:27:55.589 [2024-11-20 12:37:38.375793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.589 [2024-11-20 12:37:38.375825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.589 qpair failed and we were unable to recover it. 00:27:55.589 [2024-11-20 12:37:38.376084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.589 [2024-11-20 12:37:38.376117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.589 qpair failed and we were unable to recover it. 00:27:55.589 [2024-11-20 12:37:38.376301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.589 [2024-11-20 12:37:38.376333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.589 qpair failed and we were unable to recover it. 00:27:55.589 [2024-11-20 12:37:38.376528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.589 [2024-11-20 12:37:38.376560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.589 qpair failed and we were unable to recover it. 
00:27:55.589 [2024-11-20 12:37:38.376825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.589 [2024-11-20 12:37:38.376856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.589 qpair failed and we were unable to recover it. 00:27:55.589 [2024-11-20 12:37:38.377075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.589 [2024-11-20 12:37:38.377108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.589 qpair failed and we were unable to recover it. 00:27:55.589 [2024-11-20 12:37:38.377296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.589 [2024-11-20 12:37:38.377329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.589 qpair failed and we were unable to recover it. 00:27:55.589 [2024-11-20 12:37:38.377507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.589 [2024-11-20 12:37:38.377539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.589 qpair failed and we were unable to recover it. 00:27:55.589 [2024-11-20 12:37:38.377645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.589 [2024-11-20 12:37:38.377677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.590 qpair failed and we were unable to recover it. 
00:27:55.590 [2024-11-20 12:37:38.378035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.590 [2024-11-20 12:37:38.378107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.590 qpair failed and we were unable to recover it. 00:27:55.590 [2024-11-20 12:37:38.378327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.590 [2024-11-20 12:37:38.378362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.590 qpair failed and we were unable to recover it. 00:27:55.590 [2024-11-20 12:37:38.378537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.590 [2024-11-20 12:37:38.378568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.590 qpair failed and we were unable to recover it. 00:27:55.590 [2024-11-20 12:37:38.378789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.590 [2024-11-20 12:37:38.378820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.590 qpair failed and we were unable to recover it. 00:27:55.590 [2024-11-20 12:37:38.379057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.590 [2024-11-20 12:37:38.379092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.590 qpair failed and we were unable to recover it. 
00:27:55.590 [2024-11-20 12:37:38.379305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.590 [2024-11-20 12:37:38.379338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.590 qpair failed and we were unable to recover it. 00:27:55.590 [2024-11-20 12:37:38.379575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.590 [2024-11-20 12:37:38.379608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.590 qpair failed and we were unable to recover it. 00:27:55.590 [2024-11-20 12:37:38.379806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.590 [2024-11-20 12:37:38.379836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.590 qpair failed and we were unable to recover it. 00:27:55.590 [2024-11-20 12:37:38.380090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.590 [2024-11-20 12:37:38.380152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.590 qpair failed and we were unable to recover it. 00:27:55.590 [2024-11-20 12:37:38.380357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.591 [2024-11-20 12:37:38.380392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.591 qpair failed and we were unable to recover it. 
00:27:55.591 [2024-11-20 12:37:38.380569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.591 [2024-11-20 12:37:38.380602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.591 qpair failed and we were unable to recover it. 00:27:55.591 [2024-11-20 12:37:38.380785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.591 [2024-11-20 12:37:38.380817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.591 qpair failed and we were unable to recover it. 00:27:55.591 [2024-11-20 12:37:38.380939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.591 [2024-11-20 12:37:38.380981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.591 qpair failed and we were unable to recover it. 00:27:55.591 [2024-11-20 12:37:38.381235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.591 [2024-11-20 12:37:38.381267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.591 qpair failed and we were unable to recover it. 00:27:55.591 [2024-11-20 12:37:38.381476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.591 [2024-11-20 12:37:38.381508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.591 qpair failed and we were unable to recover it. 
00:27:55.591 [2024-11-20 12:37:38.381634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.591 [2024-11-20 12:37:38.381665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.591 qpair failed and we were unable to recover it. 00:27:55.591 [2024-11-20 12:37:38.381836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.591 [2024-11-20 12:37:38.381868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.591 qpair failed and we were unable to recover it. 00:27:55.591 [2024-11-20 12:37:38.382001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.591 [2024-11-20 12:37:38.382034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.591 qpair failed and we were unable to recover it. 00:27:55.592 [2024-11-20 12:37:38.382152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.592 [2024-11-20 12:37:38.382184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.592 qpair failed and we were unable to recover it. 00:27:55.592 [2024-11-20 12:37:38.382350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.592 [2024-11-20 12:37:38.382384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.592 qpair failed and we were unable to recover it. 
00:27:55.592 [2024-11-20 12:37:38.382507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.592 [2024-11-20 12:37:38.382538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.592 qpair failed and we were unable to recover it. 00:27:55.592 [2024-11-20 12:37:38.382661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.592 [2024-11-20 12:37:38.382693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.592 qpair failed and we were unable to recover it. 00:27:55.592 [2024-11-20 12:37:38.382874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.592 [2024-11-20 12:37:38.382906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.592 qpair failed and we were unable to recover it. 00:27:55.592 [2024-11-20 12:37:38.383105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.592 [2024-11-20 12:37:38.383138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.592 qpair failed and we were unable to recover it. 00:27:55.592 [2024-11-20 12:37:38.383384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.592 [2024-11-20 12:37:38.383416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.592 qpair failed and we were unable to recover it. 
00:27:55.592 [2024-11-20 12:37:38.383547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.592 [2024-11-20 12:37:38.383579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.592 qpair failed and we were unable to recover it. 00:27:55.592 [2024-11-20 12:37:38.383764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.592 [2024-11-20 12:37:38.383797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.592 qpair failed and we were unable to recover it. 00:27:55.592 [2024-11-20 12:37:38.383929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.593 [2024-11-20 12:37:38.383972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.593 qpair failed and we were unable to recover it. 00:27:55.593 [2024-11-20 12:37:38.384219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.593 [2024-11-20 12:37:38.384249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.593 qpair failed and we were unable to recover it. 00:27:55.593 [2024-11-20 12:37:38.384354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.593 [2024-11-20 12:37:38.384386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.593 qpair failed and we were unable to recover it. 
00:27:55.593 [2024-11-20 12:37:38.384530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.593 [2024-11-20 12:37:38.384562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.593 qpair failed and we were unable to recover it. 00:27:55.593 [2024-11-20 12:37:38.384740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.593 [2024-11-20 12:37:38.384772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.593 qpair failed and we were unable to recover it. 00:27:55.593 [2024-11-20 12:37:38.385042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.593 [2024-11-20 12:37:38.385075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.593 qpair failed and we were unable to recover it. 00:27:55.593 [2024-11-20 12:37:38.385247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.593 [2024-11-20 12:37:38.385279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.593 qpair failed and we were unable to recover it. 00:27:55.593 [2024-11-20 12:37:38.385517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.593 [2024-11-20 12:37:38.385549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.593 qpair failed and we were unable to recover it. 
00:27:55.593 [2024-11-20 12:37:38.385674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.593 [2024-11-20 12:37:38.385706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.593 qpair failed and we were unable to recover it. 00:27:55.593 [2024-11-20 12:37:38.385850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.593 [2024-11-20 12:37:38.385882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.593 qpair failed and we were unable to recover it. 00:27:55.593 [2024-11-20 12:37:38.386078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.593 [2024-11-20 12:37:38.386111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.593 qpair failed and we were unable to recover it. 00:27:55.593 [2024-11-20 12:37:38.386228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.593 [2024-11-20 12:37:38.386260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.593 qpair failed and we were unable to recover it. 00:27:55.593 [2024-11-20 12:37:38.386463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.593 [2024-11-20 12:37:38.386496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.593 qpair failed and we were unable to recover it. 
00:27:55.593 [2024-11-20 12:37:38.386676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.593 [2024-11-20 12:37:38.386707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.593 qpair failed and we were unable to recover it. 00:27:55.593 [2024-11-20 12:37:38.386834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.593 [2024-11-20 12:37:38.386866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.593 qpair failed and we were unable to recover it. 00:27:55.593 [2024-11-20 12:37:38.386990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.593 [2024-11-20 12:37:38.387023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.387150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.387182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.387351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.387383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 
00:27:55.594 [2024-11-20 12:37:38.387590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.387621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.387743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.387774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.387965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.387998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.388177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.388215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.388404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.388436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 
00:27:55.594 [2024-11-20 12:37:38.388703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.388736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.388939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.388981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.389100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.389132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.389261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.389292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.389416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.389448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 
00:27:55.594 [2024-11-20 12:37:38.389575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.389607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.389733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.389765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.389968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.390001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.390118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.390150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.390362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.390393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 
00:27:55.594 [2024-11-20 12:37:38.390515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.390546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.390680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.390712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.390833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.390865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.390973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.391006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.391275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.391306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 
00:27:55.594 [2024-11-20 12:37:38.391495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.391526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.391723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.391754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.391868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.391900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.392112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.392144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.392269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.392300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 
00:27:55.594 [2024-11-20 12:37:38.392414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.392446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.392623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.392655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.392773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.392804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.392928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.392992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.393178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.393209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 
00:27:55.594 [2024-11-20 12:37:38.393329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.393361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.393484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.594 [2024-11-20 12:37:38.393515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.594 qpair failed and we were unable to recover it. 00:27:55.594 [2024-11-20 12:37:38.393620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.393652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.393788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.393820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.393993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.394025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 
00:27:55.595 [2024-11-20 12:37:38.394151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.394184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.394309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.394341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.394456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.394488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.394676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.394708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.394878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.394910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 
00:27:55.595 [2024-11-20 12:37:38.395050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.395083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.395262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.395294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.395403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.395435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.395559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.395596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.395717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.395749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 
00:27:55.595 [2024-11-20 12:37:38.395864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.395897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.396085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.396118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.396232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.396264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.396376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.396408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.396523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.396555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 
00:27:55.595 [2024-11-20 12:37:38.396661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.396692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.396973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.397007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.397126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.397158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.397264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.397295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.397434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.397465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 
00:27:55.595 [2024-11-20 12:37:38.397658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.397689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.397808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.397839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.397968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.398001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.398170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.398202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.398309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.398340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 
00:27:55.595 [2024-11-20 12:37:38.398528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.398559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.398756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.398788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.398968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.399001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.595 [2024-11-20 12:37:38.399109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.595 [2024-11-20 12:37:38.399140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.595 qpair failed and we were unable to recover it. 00:27:55.596 [2024-11-20 12:37:38.399260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.596 [2024-11-20 12:37:38.399291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.596 qpair failed and we were unable to recover it. 
00:27:55.596 [2024-11-20 12:37:38.399462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.596 [2024-11-20 12:37:38.399493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.596 qpair failed and we were unable to recover it. 00:27:55.596 [2024-11-20 12:37:38.399680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.596 [2024-11-20 12:37:38.399711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.596 qpair failed and we were unable to recover it. 00:27:55.596 [2024-11-20 12:37:38.399827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.596 [2024-11-20 12:37:38.399858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.596 qpair failed and we were unable to recover it. 00:27:55.596 [2024-11-20 12:37:38.399969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.596 [2024-11-20 12:37:38.400002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.596 qpair failed and we were unable to recover it. 00:27:55.596 [2024-11-20 12:37:38.400124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.596 [2024-11-20 12:37:38.400156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.596 qpair failed and we were unable to recover it. 
00:27:55.596 [2024-11-20 12:37:38.400278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.596 [2024-11-20 12:37:38.400310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.596 qpair failed and we were unable to recover it. 00:27:55.596 [2024-11-20 12:37:38.400427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.596 [2024-11-20 12:37:38.400459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.596 qpair failed and we were unable to recover it. 00:27:55.596 [2024-11-20 12:37:38.400579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.596 [2024-11-20 12:37:38.400611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.596 qpair failed and we were unable to recover it. 00:27:55.596 [2024-11-20 12:37:38.400712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.596 [2024-11-20 12:37:38.400744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.596 qpair failed and we were unable to recover it. 00:27:55.596 [2024-11-20 12:37:38.400925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.596 [2024-11-20 12:37:38.400982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.596 qpair failed and we were unable to recover it. 
00:27:55.596 [2024-11-20 12:37:38.401181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.596 [2024-11-20 12:37:38.401212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.596 qpair failed and we were unable to recover it. 00:27:55.596 [2024-11-20 12:37:38.401321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.596 [2024-11-20 12:37:38.401353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.596 qpair failed and we were unable to recover it. 00:27:55.596 [2024-11-20 12:37:38.401474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.596 [2024-11-20 12:37:38.401506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.596 qpair failed and we were unable to recover it. 00:27:55.596 [2024-11-20 12:37:38.401635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.596 [2024-11-20 12:37:38.401666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.596 qpair failed and we were unable to recover it. 00:27:55.596 [2024-11-20 12:37:38.401846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.596 [2024-11-20 12:37:38.401877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.596 qpair failed and we were unable to recover it. 
00:27:55.596 [2024-11-20 12:37:38.402009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.596 [2024-11-20 12:37:38.402041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.596 qpair failed and we were unable to recover it.
00:27:55.596 [2024-11-20 12:37:38.402142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.596 [2024-11-20 12:37:38.402174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.596 qpair failed and we were unable to recover it.
00:27:55.596 [2024-11-20 12:37:38.402357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.596 [2024-11-20 12:37:38.402389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.596 qpair failed and we were unable to recover it.
00:27:55.596 [2024-11-20 12:37:38.402509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.596 [2024-11-20 12:37:38.402547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.596 qpair failed and we were unable to recover it.
00:27:55.596 [2024-11-20 12:37:38.402729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.596 [2024-11-20 12:37:38.402760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.596 qpair failed and we were unable to recover it.
00:27:55.596 [2024-11-20 12:37:38.402868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.596 [2024-11-20 12:37:38.402899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.596 qpair failed and we were unable to recover it.
00:27:55.596 [2024-11-20 12:37:38.403010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.596 [2024-11-20 12:37:38.403042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.596 qpair failed and we were unable to recover it.
00:27:55.596 [2024-11-20 12:37:38.403213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.596 [2024-11-20 12:37:38.403244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.596 qpair failed and we were unable to recover it.
00:27:55.596 [2024-11-20 12:37:38.403356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.596 [2024-11-20 12:37:38.403387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.596 qpair failed and we were unable to recover it.
00:27:55.596 [2024-11-20 12:37:38.403499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.596 [2024-11-20 12:37:38.403530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.596 qpair failed and we were unable to recover it.
00:27:55.596 [2024-11-20 12:37:38.403652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.596 [2024-11-20 12:37:38.403683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.596 qpair failed and we were unable to recover it.
00:27:55.596 [2024-11-20 12:37:38.403882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.596 [2024-11-20 12:37:38.403912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.596 qpair failed and we were unable to recover it.
00:27:55.596 [2024-11-20 12:37:38.404098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.596 [2024-11-20 12:37:38.404130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.596 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.404241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.404272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.404395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.404426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.404664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.404695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.404811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.404843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.404973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.405007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.405178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.405209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.405332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.405363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.405469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.405500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.405615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.405645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.405841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.405872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.406064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.406097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.406288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.406319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.406446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.406478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.406597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.406628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.406799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.406831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.407004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.407074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.407207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.407244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.407502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.407535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.407656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.407688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.407818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.407849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.407971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.408003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.408117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.408149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.408273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.408304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.408412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.408444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.408567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.408599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.408707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.408739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.408869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.408899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.409038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.409071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.409239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.409272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.409444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.409476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.597 qpair failed and we were unable to recover it.
00:27:55.597 [2024-11-20 12:37:38.409582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.597 [2024-11-20 12:37:38.409621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.409752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.409785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.409897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.409928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.410083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.410116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.410303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.410335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.410512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.410544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.410658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.410690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.410873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.410906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.411043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.411076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.411201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.411233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.411407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.411438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.411560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.411591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.411712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.411743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.411963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.411997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.412112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.412142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.412268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.412299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.412478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.412508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.412620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.412651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.412769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.412800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.412974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.413007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.413268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.413299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.413421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.413453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.413571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.413602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.413723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.413753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.413867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.413898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.414092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.414125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.414232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.414263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.598 [2024-11-20 12:37:38.414452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.598 [2024-11-20 12:37:38.414483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.598 qpair failed and we were unable to recover it.
00:27:55.599 [2024-11-20 12:37:38.414661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.599 [2024-11-20 12:37:38.414693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.599 qpair failed and we were unable to recover it.
00:27:55.599 [2024-11-20 12:37:38.414817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.599 [2024-11-20 12:37:38.414849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.599 qpair failed and we were unable to recover it.
00:27:55.599 [2024-11-20 12:37:38.415035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.599 [2024-11-20 12:37:38.415070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.599 qpair failed and we were unable to recover it.
00:27:55.599 [2024-11-20 12:37:38.415197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.599 [2024-11-20 12:37:38.415229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.599 qpair failed and we were unable to recover it.
00:27:55.599 [2024-11-20 12:37:38.415370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.599 [2024-11-20 12:37:38.415402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.599 qpair failed and we were unable to recover it.
00:27:55.599 [2024-11-20 12:37:38.415586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.599 [2024-11-20 12:37:38.415618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.599 qpair failed and we were unable to recover it.
00:27:55.599 [2024-11-20 12:37:38.415792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.599 [2024-11-20 12:37:38.415824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.599 qpair failed and we were unable to recover it.
00:27:55.599 [2024-11-20 12:37:38.415966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.599 [2024-11-20 12:37:38.416000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.599 qpair failed and we were unable to recover it.
00:27:55.599 [2024-11-20 12:37:38.416181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.599 [2024-11-20 12:37:38.416213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.599 qpair failed and we were unable to recover it.
00:27:55.599 [2024-11-20 12:37:38.416380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.599 [2024-11-20 12:37:38.416410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.599 qpair failed and we were unable to recover it.
00:27:55.599 [2024-11-20 12:37:38.416529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.599 [2024-11-20 12:37:38.416560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.599 qpair failed and we were unable to recover it.
00:27:55.599 [2024-11-20 12:37:38.416682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.599 [2024-11-20 12:37:38.416714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.599 qpair failed and we were unable to recover it.
00:27:55.599 [2024-11-20 12:37:38.416992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.599 [2024-11-20 12:37:38.417033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.599 qpair failed and we were unable to recover it.
00:27:55.599 [2024-11-20 12:37:38.417145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.599 [2024-11-20 12:37:38.417176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.599 qpair failed and we were unable to recover it.
00:27:55.599 [2024-11-20 12:37:38.417284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.599 [2024-11-20 12:37:38.417315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.599 qpair failed and we were unable to recover it.
00:27:55.599 [2024-11-20 12:37:38.417543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.599 [2024-11-20 12:37:38.417574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.599 qpair failed and we were unable to recover it.
00:27:55.599 [2024-11-20 12:37:38.417683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.599 [2024-11-20 12:37:38.417715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.599 qpair failed and we were unable to recover it.
00:27:55.599 [2024-11-20 12:37:38.417909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.600 [2024-11-20 12:37:38.417941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.600 qpair failed and we were unable to recover it.
00:27:55.600 [2024-11-20 12:37:38.418150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.600 [2024-11-20 12:37:38.418182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.600 qpair failed and we were unable to recover it.
00:27:55.600 [2024-11-20 12:37:38.418355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.600 [2024-11-20 12:37:38.418386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.600 qpair failed and we were unable to recover it.
00:27:55.600 [2024-11-20 12:37:38.418517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.600 [2024-11-20 12:37:38.418548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.600 qpair failed and we were unable to recover it.
00:27:55.600 [2024-11-20 12:37:38.418736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.600 [2024-11-20 12:37:38.418767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.600 qpair failed and we were unable to recover it.
00:27:55.600 [2024-11-20 12:37:38.418868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.600 [2024-11-20 12:37:38.418900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.600 qpair failed and we were unable to recover it.
00:27:55.600 [2024-11-20 12:37:38.419148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.600 [2024-11-20 12:37:38.419180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.600 qpair failed and we were unable to recover it.
00:27:55.600 [2024-11-20 12:37:38.419363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.419395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.600 [2024-11-20 12:37:38.419643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.419675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.600 [2024-11-20 12:37:38.419852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.419884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.600 [2024-11-20 12:37:38.420080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.420112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.600 [2024-11-20 12:37:38.420291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.420322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 
00:27:55.600 [2024-11-20 12:37:38.420506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.420538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.600 [2024-11-20 12:37:38.420722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.420752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.600 [2024-11-20 12:37:38.420857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.420889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.600 [2024-11-20 12:37:38.421197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.421230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.600 [2024-11-20 12:37:38.421421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.421453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 
00:27:55.600 [2024-11-20 12:37:38.421567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.421598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.600 [2024-11-20 12:37:38.421841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.421873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.600 [2024-11-20 12:37:38.422073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.422105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.600 [2024-11-20 12:37:38.422296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.422328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.600 [2024-11-20 12:37:38.422429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.422461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 
00:27:55.600 [2024-11-20 12:37:38.422663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.422695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.600 [2024-11-20 12:37:38.422824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.422855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.600 [2024-11-20 12:37:38.423113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.423146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.600 [2024-11-20 12:37:38.423363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.423394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.600 [2024-11-20 12:37:38.423567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.423599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 
00:27:55.600 [2024-11-20 12:37:38.423782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.423812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.600 [2024-11-20 12:37:38.423940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.423982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.600 [2024-11-20 12:37:38.424168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.424200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.600 [2024-11-20 12:37:38.424337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.600 [2024-11-20 12:37:38.424367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.600 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.424556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.424587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 
00:27:55.601 [2024-11-20 12:37:38.424792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.424824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.425011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.425043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.425225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.425257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.425502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.425541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.425652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.425684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 
00:27:55.601 [2024-11-20 12:37:38.425859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.425889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.426078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.426111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.426296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.426327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.426437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.426468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.426642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.426673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 
00:27:55.601 [2024-11-20 12:37:38.426868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.426901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.427095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.427127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.427250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.427280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.427469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.427501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.427614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.427652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 
00:27:55.601 [2024-11-20 12:37:38.427840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.427871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.428107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.428140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.428352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.428383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.428509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.428541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.428726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.428758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 
00:27:55.601 [2024-11-20 12:37:38.428961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.428994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.429100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.429132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.429314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.429345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.429516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.429547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.429734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.429766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 
00:27:55.601 [2024-11-20 12:37:38.430031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.430064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.430244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.430275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.430471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.430506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.430760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.601 [2024-11-20 12:37:38.430793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.601 qpair failed and we were unable to recover it. 00:27:55.601 [2024-11-20 12:37:38.430915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.430946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 
00:27:55.602 [2024-11-20 12:37:38.431164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.431195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.431319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.431350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.431470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.431500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.431704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.431735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.431914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.431944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 
00:27:55.602 [2024-11-20 12:37:38.432147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.432179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.432357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.432387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.432674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.432707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.432895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.432927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.433162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.433196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 
00:27:55.602 [2024-11-20 12:37:38.433323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.433354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.433616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.433648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.433772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.433804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.433942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.433991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.434103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.434135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 
00:27:55.602 [2024-11-20 12:37:38.434238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.434271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.434507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.434538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.434664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.434696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.434872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.434905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.435095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.435129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 
00:27:55.602 [2024-11-20 12:37:38.435273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.435304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.435604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.435635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.435760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.435792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.435976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.436009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.436250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.436282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 
00:27:55.602 [2024-11-20 12:37:38.436404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.436436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.436563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.436594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.602 [2024-11-20 12:37:38.436722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.602 [2024-11-20 12:37:38.436755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.602 qpair failed and we were unable to recover it. 00:27:55.603 [2024-11-20 12:37:38.436959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.603 [2024-11-20 12:37:38.436992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.603 qpair failed and we were unable to recover it. 00:27:55.603 [2024-11-20 12:37:38.437101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.603 [2024-11-20 12:37:38.437133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.603 qpair failed and we were unable to recover it. 
00:27:55.603 [2024-11-20 12:37:38.437244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.603 [2024-11-20 12:37:38.437275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.603 qpair failed and we were unable to recover it.
00:27:55.603 [... identical connect()/sock-connection-error/qpair-failed triplet for tqpair=0x7ff8a0000b90 repeated through 12:37:38.439082 ...]
00:27:55.603 [2024-11-20 12:37:38.439347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.603 [2024-11-20 12:37:38.439419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.603 qpair failed and we were unable to recover it.
00:27:55.613 [... identical connect()/sock-connection-error/qpair-failed triplet for tqpair=0x7ff898000b90 repeated through 12:37:38.461622 ...]
00:27:55.613 [2024-11-20 12:37:38.461756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.613 [2024-11-20 12:37:38.461787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.613 qpair failed and we were unable to recover it. 00:27:55.613 [2024-11-20 12:37:38.461918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.613 [2024-11-20 12:37:38.461958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.613 qpair failed and we were unable to recover it. 00:27:55.613 [2024-11-20 12:37:38.462157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.613 [2024-11-20 12:37:38.462189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.613 qpair failed and we were unable to recover it. 00:27:55.613 [2024-11-20 12:37:38.462424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.613 [2024-11-20 12:37:38.462456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.613 qpair failed and we were unable to recover it. 00:27:55.613 [2024-11-20 12:37:38.462718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.613 [2024-11-20 12:37:38.462750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.613 qpair failed and we were unable to recover it. 
00:27:55.614 [2024-11-20 12:37:38.462925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.614 [2024-11-20 12:37:38.462977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.614 qpair failed and we were unable to recover it. 00:27:55.614 [2024-11-20 12:37:38.463168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.614 [2024-11-20 12:37:38.463199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.614 qpair failed and we were unable to recover it. 00:27:55.614 [2024-11-20 12:37:38.463401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.614 [2024-11-20 12:37:38.463433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.614 qpair failed and we were unable to recover it. 00:27:55.614 [2024-11-20 12:37:38.463543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.614 [2024-11-20 12:37:38.463574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.614 qpair failed and we were unable to recover it. 00:27:55.614 [2024-11-20 12:37:38.463699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.614 [2024-11-20 12:37:38.463730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.614 qpair failed and we were unable to recover it. 
00:27:55.614 [2024-11-20 12:37:38.463844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.614 [2024-11-20 12:37:38.463876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.614 qpair failed and we were unable to recover it. 00:27:55.614 [2024-11-20 12:37:38.464014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.614 [2024-11-20 12:37:38.464047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.615 qpair failed and we were unable to recover it. 00:27:55.615 [2024-11-20 12:37:38.464286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.615 [2024-11-20 12:37:38.464318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.615 qpair failed and we were unable to recover it. 00:27:55.615 [2024-11-20 12:37:38.464499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.615 [2024-11-20 12:37:38.464531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.615 qpair failed and we were unable to recover it. 00:27:55.615 [2024-11-20 12:37:38.464702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.615 [2024-11-20 12:37:38.464734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.615 qpair failed and we were unable to recover it. 
00:27:55.615 [2024-11-20 12:37:38.464845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.615 [2024-11-20 12:37:38.464878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.615 qpair failed and we were unable to recover it. 00:27:55.615 [2024-11-20 12:37:38.465080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.615 [2024-11-20 12:37:38.465112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.615 qpair failed and we were unable to recover it. 00:27:55.615 [2024-11-20 12:37:38.465282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.615 [2024-11-20 12:37:38.465314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.615 qpair failed and we were unable to recover it. 00:27:55.615 [2024-11-20 12:37:38.465550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.615 [2024-11-20 12:37:38.465581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.615 qpair failed and we were unable to recover it. 00:27:55.615 [2024-11-20 12:37:38.465711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.615 [2024-11-20 12:37:38.465741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.615 qpair failed and we were unable to recover it. 
00:27:55.616 [2024-11-20 12:37:38.466007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.616 [2024-11-20 12:37:38.466040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.616 qpair failed and we were unable to recover it. 00:27:55.616 [2024-11-20 12:37:38.466153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.616 [2024-11-20 12:37:38.466183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.616 qpair failed and we were unable to recover it. 00:27:55.616 [2024-11-20 12:37:38.466350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.616 [2024-11-20 12:37:38.466382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.616 qpair failed and we were unable to recover it. 00:27:55.616 [2024-11-20 12:37:38.466483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.616 [2024-11-20 12:37:38.466514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.616 qpair failed and we were unable to recover it. 00:27:55.616 [2024-11-20 12:37:38.466643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.616 [2024-11-20 12:37:38.466673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.616 qpair failed and we were unable to recover it. 
00:27:55.616 [2024-11-20 12:37:38.466845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.616 [2024-11-20 12:37:38.466875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.616 qpair failed and we were unable to recover it. 00:27:55.616 [2024-11-20 12:37:38.467050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.616 [2024-11-20 12:37:38.467083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.616 qpair failed and we were unable to recover it. 00:27:55.616 [2024-11-20 12:37:38.467255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.616 [2024-11-20 12:37:38.467286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.616 qpair failed and we were unable to recover it. 00:27:55.616 [2024-11-20 12:37:38.467455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.616 [2024-11-20 12:37:38.467492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.616 qpair failed and we were unable to recover it. 00:27:55.616 [2024-11-20 12:37:38.467605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.616 [2024-11-20 12:37:38.467636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.616 qpair failed and we were unable to recover it. 
00:27:55.616 [2024-11-20 12:37:38.467870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.616 [2024-11-20 12:37:38.467900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.616 qpair failed and we were unable to recover it. 00:27:55.616 [2024-11-20 12:37:38.468130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.616 [2024-11-20 12:37:38.468164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.617 qpair failed and we were unable to recover it. 00:27:55.617 [2024-11-20 12:37:38.468344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.617 [2024-11-20 12:37:38.468376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.617 qpair failed and we were unable to recover it. 00:27:55.617 [2024-11-20 12:37:38.468506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.617 [2024-11-20 12:37:38.468537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.617 qpair failed and we were unable to recover it. 00:27:55.617 [2024-11-20 12:37:38.468752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.617 [2024-11-20 12:37:38.468783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.617 qpair failed and we were unable to recover it. 
00:27:55.617 [2024-11-20 12:37:38.468914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.617 [2024-11-20 12:37:38.468945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.617 qpair failed and we were unable to recover it. 00:27:55.617 [2024-11-20 12:37:38.469168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.617 [2024-11-20 12:37:38.469200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.617 qpair failed and we were unable to recover it. 00:27:55.617 [2024-11-20 12:37:38.469333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.617 [2024-11-20 12:37:38.469364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.617 qpair failed and we were unable to recover it. 00:27:55.617 [2024-11-20 12:37:38.469629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.617 [2024-11-20 12:37:38.469660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.617 qpair failed and we were unable to recover it. 00:27:55.617 [2024-11-20 12:37:38.469849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.617 [2024-11-20 12:37:38.469879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.617 qpair failed and we were unable to recover it. 
00:27:55.617 [2024-11-20 12:37:38.469999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.617 [2024-11-20 12:37:38.470033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.617 qpair failed and we were unable to recover it. 00:27:55.617 [2024-11-20 12:37:38.470206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.618 [2024-11-20 12:37:38.470238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.618 qpair failed and we were unable to recover it. 00:27:55.618 [2024-11-20 12:37:38.470374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.618 [2024-11-20 12:37:38.470406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.618 qpair failed and we were unable to recover it. 00:27:55.618 [2024-11-20 12:37:38.470594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.618 [2024-11-20 12:37:38.470625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.618 qpair failed and we were unable to recover it. 00:27:55.618 [2024-11-20 12:37:38.470748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.618 [2024-11-20 12:37:38.470779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.618 qpair failed and we were unable to recover it. 
00:27:55.618 [2024-11-20 12:37:38.470990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.618 [2024-11-20 12:37:38.471023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.618 qpair failed and we were unable to recover it. 00:27:55.618 [2024-11-20 12:37:38.471217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.618 [2024-11-20 12:37:38.471249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.618 qpair failed and we were unable to recover it. 00:27:55.618 [2024-11-20 12:37:38.471443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.618 [2024-11-20 12:37:38.471474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.618 qpair failed and we were unable to recover it. 00:27:55.618 [2024-11-20 12:37:38.471658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.618 [2024-11-20 12:37:38.471689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.618 qpair failed and we were unable to recover it. 00:27:55.618 [2024-11-20 12:37:38.471871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.618 [2024-11-20 12:37:38.471903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.618 qpair failed and we were unable to recover it. 
00:27:55.618 [2024-11-20 12:37:38.472040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.618 [2024-11-20 12:37:38.472074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.618 qpair failed and we were unable to recover it. 00:27:55.618 [2024-11-20 12:37:38.472258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.618 [2024-11-20 12:37:38.472288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.618 qpair failed and we were unable to recover it. 00:27:55.618 [2024-11-20 12:37:38.472399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.472431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 00:27:55.619 [2024-11-20 12:37:38.472608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.472639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 00:27:55.619 [2024-11-20 12:37:38.472926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.472968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 
00:27:55.619 [2024-11-20 12:37:38.473090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.473122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 00:27:55.619 [2024-11-20 12:37:38.473242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.473273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 00:27:55.619 [2024-11-20 12:37:38.473514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.473544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 00:27:55.619 [2024-11-20 12:37:38.473720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.473751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 00:27:55.619 [2024-11-20 12:37:38.473969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.474003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 
00:27:55.619 [2024-11-20 12:37:38.474177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.474208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 00:27:55.619 [2024-11-20 12:37:38.474407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.474437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 00:27:55.619 [2024-11-20 12:37:38.474663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.474694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 00:27:55.619 [2024-11-20 12:37:38.474814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.474845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 00:27:55.619 [2024-11-20 12:37:38.475096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.475129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 
00:27:55.619 [2024-11-20 12:37:38.475235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.475266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 00:27:55.619 [2024-11-20 12:37:38.475509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.475540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 00:27:55.619 [2024-11-20 12:37:38.475713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.475744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 00:27:55.619 [2024-11-20 12:37:38.475929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.475977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 00:27:55.619 [2024-11-20 12:37:38.476162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.476194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 
00:27:55.619 [2024-11-20 12:37:38.476378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.476409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 00:27:55.619 [2024-11-20 12:37:38.476528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.476560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 00:27:55.619 [2024-11-20 12:37:38.476668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.476699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 00:27:55.619 [2024-11-20 12:37:38.476938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.477001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 00:27:55.619 [2024-11-20 12:37:38.477136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.477168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 
00:27:55.619 [2024-11-20 12:37:38.477309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.619 [2024-11-20 12:37:38.477341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.619 qpair failed and we were unable to recover it. 00:27:55.619 [2024-11-20 12:37:38.477457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.620 [2024-11-20 12:37:38.477490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.620 qpair failed and we were unable to recover it. 00:27:55.620 [2024-11-20 12:37:38.477750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.620 [2024-11-20 12:37:38.477781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.620 qpair failed and we were unable to recover it. 00:27:55.620 [2024-11-20 12:37:38.477978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.620 [2024-11-20 12:37:38.478012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.620 qpair failed and we were unable to recover it. 00:27:55.620 [2024-11-20 12:37:38.478194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.620 [2024-11-20 12:37:38.478226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.620 qpair failed and we were unable to recover it. 
00:27:55.623 [2024-11-20 12:37:38.501414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.501445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.501566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.501597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.501799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.501830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.502016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.502049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.502296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.502328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 
00:27:55.623 [2024-11-20 12:37:38.502516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.502546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.502671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.502702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.502825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.502856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.503074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.503107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.503279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.503311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 
00:27:55.623 [2024-11-20 12:37:38.503505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.503537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.503705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.503736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.503933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.503978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.504229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.504260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.504528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.504560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 
00:27:55.623 [2024-11-20 12:37:38.504750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.504780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.504979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.505012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.505220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.505252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.505445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.505477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.505666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.505696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 
00:27:55.623 [2024-11-20 12:37:38.505807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.505838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.505963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.505997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.506172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.506203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.506317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.506348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.506515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.506547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 
00:27:55.623 [2024-11-20 12:37:38.506734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.506771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.507033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.507065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.507183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.507215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.507450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.507480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.507598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.507629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 
00:27:55.623 [2024-11-20 12:37:38.507747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.507778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.507891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.623 [2024-11-20 12:37:38.507922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.623 qpair failed and we were unable to recover it. 00:27:55.623 [2024-11-20 12:37:38.508117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.508149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.508274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.508306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.508593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.508623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 
00:27:55.624 [2024-11-20 12:37:38.508806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.508837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.509010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.509049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.509229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.509260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.509443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.509475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.509680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.509712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 
00:27:55.624 [2024-11-20 12:37:38.509956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.509989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.510173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.510206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.510330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.510361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.510565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.510596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.510768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.510800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 
00:27:55.624 [2024-11-20 12:37:38.510927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.510986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.511234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.511264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.511396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.511427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.511557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.511588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.511788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.511819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 
00:27:55.624 [2024-11-20 12:37:38.512024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.512056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.512241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.512272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.512402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.512434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.512673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.512704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.512874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.512905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 
00:27:55.624 [2024-11-20 12:37:38.513049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.513081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.513281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.513312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.513420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.513450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.513630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.513662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.513873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.513903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 
00:27:55.624 [2024-11-20 12:37:38.514130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.514161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.514433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.514464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.514585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.514615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.514720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.514749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.514995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.515027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 
00:27:55.624 [2024-11-20 12:37:38.515214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.515250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.515425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.515455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.515706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.515737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.624 [2024-11-20 12:37:38.515975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.624 [2024-11-20 12:37:38.516007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.624 qpair failed and we were unable to recover it. 00:27:55.625 [2024-11-20 12:37:38.516113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.625 [2024-11-20 12:37:38.516144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.625 qpair failed and we were unable to recover it. 
00:27:55.625 [2024-11-20 12:37:38.516311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.625 [2024-11-20 12:37:38.516342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.625 qpair failed and we were unable to recover it. 00:27:55.625 [2024-11-20 12:37:38.516559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.625 [2024-11-20 12:37:38.516590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.625 qpair failed and we were unable to recover it. 00:27:55.625 [2024-11-20 12:37:38.516715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.625 [2024-11-20 12:37:38.516746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.625 qpair failed and we were unable to recover it. 00:27:55.625 [2024-11-20 12:37:38.516925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.625 [2024-11-20 12:37:38.516971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.625 qpair failed and we were unable to recover it. 00:27:55.625 [2024-11-20 12:37:38.517098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.625 [2024-11-20 12:37:38.517129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.625 qpair failed and we were unable to recover it. 
00:27:55.625 [2024-11-20 12:37:38.517257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.625 [2024-11-20 12:37:38.517288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.625 qpair failed and we were unable to recover it. 00:27:55.625 [2024-11-20 12:37:38.517407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.625 [2024-11-20 12:37:38.517438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.625 qpair failed and we were unable to recover it. 00:27:55.625 [2024-11-20 12:37:38.517707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.625 [2024-11-20 12:37:38.517738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.625 qpair failed and we were unable to recover it. 00:27:55.625 [2024-11-20 12:37:38.517862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.625 [2024-11-20 12:37:38.517894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.625 qpair failed and we were unable to recover it. 00:27:55.625 [2024-11-20 12:37:38.518081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.625 [2024-11-20 12:37:38.518113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.625 qpair failed and we were unable to recover it. 
00:27:55.625 [2024-11-20 12:37:38.518288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.625 [2024-11-20 12:37:38.518319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.625 qpair failed and we were unable to recover it. 00:27:55.625 [2024-11-20 12:37:38.518603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.625 [2024-11-20 12:37:38.518634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.625 qpair failed and we were unable to recover it. 00:27:55.625 [2024-11-20 12:37:38.518745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.625 [2024-11-20 12:37:38.518776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.625 qpair failed and we were unable to recover it. 00:27:55.626 [2024-11-20 12:37:38.518965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.626 [2024-11-20 12:37:38.518998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.626 qpair failed and we were unable to recover it. 00:27:55.626 [2024-11-20 12:37:38.519174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.626 [2024-11-20 12:37:38.519206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.626 qpair failed and we were unable to recover it. 
00:27:55.626 [2024-11-20 12:37:38.519383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.626 [2024-11-20 12:37:38.519414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.626 qpair failed and we were unable to recover it.
00:27:55.626 [2024-11-20 12:37:38.519585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.626 [2024-11-20 12:37:38.519617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.626 qpair failed and we were unable to recover it.
00:27:55.626 [2024-11-20 12:37:38.519860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.626 [2024-11-20 12:37:38.519892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.626 qpair failed and we were unable to recover it.
00:27:55.626 [2024-11-20 12:37:38.520160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.626 [2024-11-20 12:37:38.520193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.626 qpair failed and we were unable to recover it.
00:27:55.626 [2024-11-20 12:37:38.520371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.626 [2024-11-20 12:37:38.520404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.626 qpair failed and we were unable to recover it.
00:27:55.626 [2024-11-20 12:37:38.520613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.626 [2024-11-20 12:37:38.520645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.626 qpair failed and we were unable to recover it.
00:27:55.626 [2024-11-20 12:37:38.520773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.626 [2024-11-20 12:37:38.520804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.626 qpair failed and we were unable to recover it.
00:27:55.627 [2024-11-20 12:37:38.520983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.627 [2024-11-20 12:37:38.521016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.627 qpair failed and we were unable to recover it.
00:27:55.627 [2024-11-20 12:37:38.521153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.627 [2024-11-20 12:37:38.521184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.627 qpair failed and we were unable to recover it.
00:27:55.627 [2024-11-20 12:37:38.521310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.627 [2024-11-20 12:37:38.521342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.627 qpair failed and we were unable to recover it.
00:27:55.627 [2024-11-20 12:37:38.521534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.627 [2024-11-20 12:37:38.521564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.627 qpair failed and we were unable to recover it.
00:27:55.627 [2024-11-20 12:37:38.521850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.628 [2024-11-20 12:37:38.521880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.628 qpair failed and we were unable to recover it.
00:27:55.628 [2024-11-20 12:37:38.522011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.628 [2024-11-20 12:37:38.522043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.628 qpair failed and we were unable to recover it.
00:27:55.628 [2024-11-20 12:37:38.522235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.628 [2024-11-20 12:37:38.522266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.628 qpair failed and we were unable to recover it.
00:27:55.628 [2024-11-20 12:37:38.522389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.628 [2024-11-20 12:37:38.522420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.628 qpair failed and we were unable to recover it.
00:27:55.628 [2024-11-20 12:37:38.522679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.628 [2024-11-20 12:37:38.522709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.628 qpair failed and we were unable to recover it.
00:27:55.628 [2024-11-20 12:37:38.522880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.628 [2024-11-20 12:37:38.522910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.628 qpair failed and we were unable to recover it.
00:27:55.628 [2024-11-20 12:37:38.523047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.628 [2024-11-20 12:37:38.523079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.628 qpair failed and we were unable to recover it.
00:27:55.628 [2024-11-20 12:37:38.523204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.629 [2024-11-20 12:37:38.523235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.629 qpair failed and we were unable to recover it.
00:27:55.629 [2024-11-20 12:37:38.523420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.629 [2024-11-20 12:37:38.523450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.629 qpair failed and we were unable to recover it.
00:27:55.629 [2024-11-20 12:37:38.523571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.629 [2024-11-20 12:37:38.523608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.629 qpair failed and we were unable to recover it.
00:27:55.629 [2024-11-20 12:37:38.523865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.629 [2024-11-20 12:37:38.523897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.629 qpair failed and we were unable to recover it.
00:27:55.629 [2024-11-20 12:37:38.524085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.629 [2024-11-20 12:37:38.524117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.629 qpair failed and we were unable to recover it.
00:27:55.629 [2024-11-20 12:37:38.524297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.629 [2024-11-20 12:37:38.524329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.629 qpair failed and we were unable to recover it.
00:27:55.629 [2024-11-20 12:37:38.524453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.629 [2024-11-20 12:37:38.524483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.629 qpair failed and we were unable to recover it.
00:27:55.629 [2024-11-20 12:37:38.524729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.629 [2024-11-20 12:37:38.524760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.629 qpair failed and we were unable to recover it.
00:27:55.630 [2024-11-20 12:37:38.524996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.630 [2024-11-20 12:37:38.525030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.630 qpair failed and we were unable to recover it.
00:27:55.630 [2024-11-20 12:37:38.525204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.630 [2024-11-20 12:37:38.525236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.630 qpair failed and we were unable to recover it.
00:27:55.630 [2024-11-20 12:37:38.525369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.630 [2024-11-20 12:37:38.525400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.630 qpair failed and we were unable to recover it.
00:27:55.630 [2024-11-20 12:37:38.525531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.630 [2024-11-20 12:37:38.525562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.630 qpair failed and we were unable to recover it.
00:27:55.630 [2024-11-20 12:37:38.525669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.630 [2024-11-20 12:37:38.525701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.630 qpair failed and we were unable to recover it.
00:27:55.630 [2024-11-20 12:37:38.525929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.630 [2024-11-20 12:37:38.525969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.630 qpair failed and we were unable to recover it.
00:27:55.630 [2024-11-20 12:37:38.526150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.630 [2024-11-20 12:37:38.526181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.630 qpair failed and we were unable to recover it.
00:27:55.630 [2024-11-20 12:37:38.526306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.630 [2024-11-20 12:37:38.526337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.630 qpair failed and we were unable to recover it.
00:27:55.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 602160 Killed "${NVMF_APP[@]}" "$@"
00:27:55.630 [2024-11-20 12:37:38.526480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.630 [2024-11-20 12:37:38.526514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.630 qpair failed and we were unable to recover it.
00:27:55.630 [2024-11-20 12:37:38.526620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.631 [2024-11-20 12:37:38.526651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.631 qpair failed and we were unable to recover it.
00:27:55.631 [2024-11-20 12:37:38.526834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.631 [2024-11-20 12:37:38.526864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.631 qpair failed and we were unable to recover it.
00:27:55.631 [2024-11-20 12:37:38.527048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.631 [2024-11-20 12:37:38.527081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.631 qpair failed and we were unable to recover it.
00:27:55.631 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:55.631 [2024-11-20 12:37:38.527255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.631 [2024-11-20 12:37:38.527287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.631 qpair failed and we were unable to recover it.
00:27:55.631 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:55.631 [2024-11-20 12:37:38.527535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.631 [2024-11-20 12:37:38.527566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.631 qpair failed and we were unable to recover it.
00:27:55.631 [2024-11-20 12:37:38.527746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.631 [2024-11-20 12:37:38.527777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.631 qpair failed and we were unable to recover it.
00:27:55.632 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:55.632 [2024-11-20 12:37:38.527918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.632 [2024-11-20 12:37:38.527959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.632 qpair failed and we were unable to recover it.
00:27:55.632 [2024-11-20 12:37:38.528083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.632 [2024-11-20 12:37:38.528115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.632 qpair failed and we were unable to recover it.
00:27:55.632 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:55.632 [2024-11-20 12:37:38.528221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.632 [2024-11-20 12:37:38.528252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.632 qpair failed and we were unable to recover it.
00:27:55.632 [2024-11-20 12:37:38.528448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.632 [2024-11-20 12:37:38.528482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.632 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:55.632 qpair failed and we were unable to recover it.
00:27:55.632 [2024-11-20 12:37:38.528678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.632 [2024-11-20 12:37:38.528709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.632 qpair failed and we were unable to recover it.
00:27:55.632 [2024-11-20 12:37:38.528922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.632 [2024-11-20 12:37:38.528965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.632 qpair failed and we were unable to recover it.
00:27:55.632 [2024-11-20 12:37:38.529090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.632 [2024-11-20 12:37:38.529119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.632 qpair failed and we were unable to recover it.
00:27:55.632 [2024-11-20 12:37:38.529246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.632 [2024-11-20 12:37:38.529276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.633 qpair failed and we were unable to recover it.
00:27:55.633 [2024-11-20 12:37:38.529465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.633 [2024-11-20 12:37:38.529495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.633 qpair failed and we were unable to recover it.
00:27:55.633 [2024-11-20 12:37:38.529634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.633 [2024-11-20 12:37:38.529666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.633 qpair failed and we were unable to recover it.
00:27:55.633 [2024-11-20 12:37:38.529862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.633 [2024-11-20 12:37:38.529893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.633 qpair failed and we were unable to recover it.
00:27:55.633 [2024-11-20 12:37:38.530039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.633 [2024-11-20 12:37:38.530074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.633 qpair failed and we were unable to recover it.
00:27:55.633 [2024-11-20 12:37:38.530258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.633 [2024-11-20 12:37:38.530289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.633 qpair failed and we were unable to recover it.
00:27:55.633 [2024-11-20 12:37:38.530470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.633 [2024-11-20 12:37:38.530502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.633 qpair failed and we were unable to recover it.
00:27:55.633 [2024-11-20 12:37:38.530670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.633 [2024-11-20 12:37:38.530701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.634 qpair failed and we were unable to recover it.
00:27:55.634 [2024-11-20 12:37:38.530825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.634 [2024-11-20 12:37:38.530857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.634 qpair failed and we were unable to recover it.
00:27:55.634 [2024-11-20 12:37:38.530988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.634 [2024-11-20 12:37:38.531022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.634 qpair failed and we were unable to recover it.
00:27:55.634 [2024-11-20 12:37:38.531144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.634 [2024-11-20 12:37:38.531175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.634 qpair failed and we were unable to recover it.
00:27:55.634 [2024-11-20 12:37:38.531306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.634 [2024-11-20 12:37:38.531338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.634 qpair failed and we were unable to recover it.
00:27:55.634 [2024-11-20 12:37:38.531525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.634 [2024-11-20 12:37:38.531556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.634 qpair failed and we were unable to recover it.
00:27:55.634 [2024-11-20 12:37:38.531674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.634 [2024-11-20 12:37:38.531704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.634 qpair failed and we were unable to recover it.
00:27:55.634 [2024-11-20 12:37:38.531885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.634 [2024-11-20 12:37:38.531915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.634 qpair failed and we were unable to recover it.
00:27:55.634 [2024-11-20 12:37:38.532051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.634 [2024-11-20 12:37:38.532083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.634 qpair failed and we were unable to recover it.
00:27:55.634 [2024-11-20 12:37:38.532202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.634 [2024-11-20 12:37:38.532234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.634 qpair failed and we were unable to recover it.
00:27:55.635 [2024-11-20 12:37:38.532384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-11-20 12:37:38.532413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-11-20 12:37:38.532536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-11-20 12:37:38.532566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-11-20 12:37:38.532674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-11-20 12:37:38.532704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-11-20 12:37:38.532885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-11-20 12:37:38.532913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-11-20 12:37:38.533148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-11-20 12:37:38.533179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-11-20 12:37:38.533284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-11-20 12:37:38.533312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-11-20 12:37:38.533578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-11-20 12:37:38.533609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-11-20 12:37:38.533790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-11-20 12:37:38.533820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-11-20 12:37:38.534021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-11-20 12:37:38.534054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.635 [2024-11-20 12:37:38.534168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.635 [2024-11-20 12:37:38.534200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.635 qpair failed and we were unable to recover it.
00:27:55.636 [2024-11-20 12:37:38.534309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-11-20 12:37:38.534340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-11-20 12:37:38.534465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-11-20 12:37:38.534496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-11-20 12:37:38.534686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-11-20 12:37:38.534716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-11-20 12:37:38.534909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-11-20 12:37:38.534939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-11-20 12:37:38.535071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-11-20 12:37:38.535102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=602925
00:27:55.636 [2024-11-20 12:37:38.535278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-11-20 12:37:38.535311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 [2024-11-20 12:37:38.535434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-11-20 12:37:38.535465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 602925
00:27:55.636 [2024-11-20 12:37:38.535593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.636 [2024-11-20 12:37:38.535623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.636 qpair failed and we were unable to recover it.
00:27:55.636 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:55.637 [2024-11-20 12:37:38.535864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-11-20 12:37:38.535897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 602925 ']'
00:27:55.637 [2024-11-20 12:37:38.536081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-11-20 12:37:38.536115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-11-20 12:37:38.536236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-11-20 12:37:38.536268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:55.637 [2024-11-20 12:37:38.536390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-11-20 12:37:38.536422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-11-20 12:37:38.536551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-11-20 12:37:38.536582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:55.637 [2024-11-20 12:37:38.536683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.637 [2024-11-20 12:37:38.536714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.637 qpair failed and we were unable to recover it.
00:27:55.637 [2024-11-20 12:37:38.536898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:55.638 [2024-11-20 12:37:38.536931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-11-20 12:37:38.537075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-11-20 12:37:38.537106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:55.638 [2024-11-20 12:37:38.537234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-11-20 12:37:38.537266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-11-20 12:37:38.537447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:55.638 [2024-11-20 12:37:38.537479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-11-20 12:37:38.537658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-11-20 12:37:38.537691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-11-20 12:37:38.537811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.638 [2024-11-20 12:37:38.537842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.638 qpair failed and we were unable to recover it.
00:27:55.638 [2024-11-20 12:37:38.538024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-11-20 12:37:38.538056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-11-20 12:37:38.538258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-11-20 12:37:38.538290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-11-20 12:37:38.538407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-11-20 12:37:38.538438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-11-20 12:37:38.538642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-11-20 12:37:38.538674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.639 [2024-11-20 12:37:38.538911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.639 [2024-11-20 12:37:38.538943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.639 qpair failed and we were unable to recover it.
00:27:55.640 [2024-11-20 12:37:38.539132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.640 [2024-11-20 12:37:38.539165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.640 qpair failed and we were unable to recover it.
00:27:55.640 [2024-11-20 12:37:38.539315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.640 [2024-11-20 12:37:38.539346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.640 qpair failed and we were unable to recover it.
00:27:55.640 [2024-11-20 12:37:38.539462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.640 [2024-11-20 12:37:38.539494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.640 qpair failed and we were unable to recover it.
00:27:55.640 [2024-11-20 12:37:38.539664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.640 [2024-11-20 12:37:38.539695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.640 qpair failed and we were unable to recover it.
00:27:55.640 [2024-11-20 12:37:38.539815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.640 [2024-11-20 12:37:38.539845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.640 qpair failed and we were unable to recover it.
00:27:55.640 [2024-11-20 12:37:38.539963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.640 [2024-11-20 12:37:38.539996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.640 qpair failed and we were unable to recover it.
00:27:55.640 [2024-11-20 12:37:38.540166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.640 [2024-11-20 12:37:38.540204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.640 qpair failed and we were unable to recover it.
00:27:55.640 [2024-11-20 12:37:38.540309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.640 [2024-11-20 12:37:38.540339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.640 qpair failed and we were unable to recover it.
00:27:55.640 [2024-11-20 12:37:38.540542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.640 [2024-11-20 12:37:38.540575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.640 qpair failed and we were unable to recover it.
00:27:55.640 [2024-11-20 12:37:38.540756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.640 [2024-11-20 12:37:38.540788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.640 qpair failed and we were unable to recover it.
00:27:55.640 [2024-11-20 12:37:38.540994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.640 [2024-11-20 12:37:38.541031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.640 qpair failed and we were unable to recover it.
00:27:55.640 [2024-11-20 12:37:38.541152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.640 [2024-11-20 12:37:38.541182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.541307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.541338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.541450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.541482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.541673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.541704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.541892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.541924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.542057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.542089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.542201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.542232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.542424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.542455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.542569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.542600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.542778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.542810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.542980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.543013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.543133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.543164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.543401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.543432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.543563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.543594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.543768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.543799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.543916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.543957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.544066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.544097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.544225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.544257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.544364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.544396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.544608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.544640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.544757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.544788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.544905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.544935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.545070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.545102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.545232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.545264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.545508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.545539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.545648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.545681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.545803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.545833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.546019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.546052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.546296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.546328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.546563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.546594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.546715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.546746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.641 [2024-11-20 12:37:38.546925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.641 [2024-11-20 12:37:38.546967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.641 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.547077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.547109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.547233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.547265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.547380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.547412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.547594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.547630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.547760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.547792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.547917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.547957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.548063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.548095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.548263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.548295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.548412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.548443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.548622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.548652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.548769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.548801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.548997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.549030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.549135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.549168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.549286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.549318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.549418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.549450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.549560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.549592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.549765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.549797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.549929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.549971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.550078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.550108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.550236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.550267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.550443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.550475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.550672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.550703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.550816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.550847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.551025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.551057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.551166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.551196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.551377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.551407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.551592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.551622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.551727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.551758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.551956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.642 [2024-11-20 12:37:38.551990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.642 qpair failed and we were unable to recover it.
00:27:55.642 [2024-11-20 12:37:38.552121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-11-20 12:37:38.552153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-11-20 12:37:38.552272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-11-20 12:37:38.552303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-11-20 12:37:38.552544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-11-20 12:37:38.552574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-11-20 12:37:38.552835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-11-20 12:37:38.552867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-11-20 12:37:38.552994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-11-20 12:37:38.553027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-11-20 12:37:38.553146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-11-20 12:37:38.553177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-11-20 12:37:38.553295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-11-20 12:37:38.553326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-11-20 12:37:38.553497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-11-20 12:37:38.553528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-11-20 12:37:38.553755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-11-20 12:37:38.553785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-11-20 12:37:38.553903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-11-20 12:37:38.553933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-11-20 12:37:38.554130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-11-20 12:37:38.554162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-11-20 12:37:38.554287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.643 [2024-11-20 12:37:38.554318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.643 qpair failed and we were unable to recover it.
00:27:55.643 [2024-11-20 12:37:38.554420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.554451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.554628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.554659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.554834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.554871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.555044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.555078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.555318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.555351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 
00:27:55.643 [2024-11-20 12:37:38.555543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.555573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.555748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.555779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.555904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.555935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.556054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.556086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.556260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.556290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 
00:27:55.643 [2024-11-20 12:37:38.556398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.556429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.556533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.556565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.556671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.556703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.556808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.556838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.556987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.557020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 
00:27:55.643 [2024-11-20 12:37:38.557204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.557235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.557498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.557529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.557636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.557667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.557871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.557903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.558147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.558180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 
00:27:55.643 [2024-11-20 12:37:38.558465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.558496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.558597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.558628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.558822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.558852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.558996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.559029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.559213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.559244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 
00:27:55.643 [2024-11-20 12:37:38.559374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.559404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.559676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.559706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.559819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.559849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.559965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.559996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 00:27:55.643 [2024-11-20 12:37:38.560163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.643 [2024-11-20 12:37:38.560235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.643 qpair failed and we were unable to recover it. 
00:27:55.644 [2024-11-20 12:37:38.560546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.560584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.560835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.560867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.561084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.561118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.561302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.561335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.561469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.561500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 
00:27:55.644 [2024-11-20 12:37:38.561676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.561708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.561882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.561915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.562131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.562165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.562362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.562396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.562523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.562553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 
00:27:55.644 [2024-11-20 12:37:38.562821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.562855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.562974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.563008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.563181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.563213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.563359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.563393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.563506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.563538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 
00:27:55.644 [2024-11-20 12:37:38.563718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.563750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.563935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.563980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.564081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.564112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.564232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.564263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.564460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.564491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 
00:27:55.644 [2024-11-20 12:37:38.564675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.564707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.564886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.564918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.565268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.565336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.565475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.565510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.565705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.565738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 
00:27:55.644 [2024-11-20 12:37:38.565867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.565898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.566163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.566202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.566348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.566379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.566644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.566675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.566814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.566847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 
00:27:55.644 [2024-11-20 12:37:38.567022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.567055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.567242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.567275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.567382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.567413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.567519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.567550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.567671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.567702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 
00:27:55.644 [2024-11-20 12:37:38.567885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.567918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.568173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.568207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.568385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.568416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.568601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.568633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.568826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.568876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 
00:27:55.644 [2024-11-20 12:37:38.569008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.569041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.569214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.644 [2024-11-20 12:37:38.569245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.644 qpair failed and we were unable to recover it. 00:27:55.644 [2024-11-20 12:37:38.569435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.569466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.569636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.569668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.569855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.569885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 
00:27:55.645 [2024-11-20 12:37:38.570007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.570041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.570186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.570217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.570457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.570488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.570604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.570635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.570738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.570770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 
00:27:55.645 [2024-11-20 12:37:38.571012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.571045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.571248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.571280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.571477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.571508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.571629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.571661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.571883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.571913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 
00:27:55.645 [2024-11-20 12:37:38.572202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.572235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.572456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.572488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.572615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.572646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.572847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.572880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.573065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.573097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 
00:27:55.645 [2024-11-20 12:37:38.573279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.573311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.573495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.573526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.573707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.573738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.573865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.573895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.574147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.574179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 
00:27:55.645 [2024-11-20 12:37:38.574311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.574342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.574497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.574566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.574712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.574746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.574940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.574992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.575104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.575136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 
00:27:55.645 [2024-11-20 12:37:38.575353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.575385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.575654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.575685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.575802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.575834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.575937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.575982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.576198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.576230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 
00:27:55.645 [2024-11-20 12:37:38.576399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.576431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.576618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.576648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.576825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.576856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.576982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.577016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.577190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.577231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 
00:27:55.645 [2024-11-20 12:37:38.577337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.577369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.577581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.577611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.577784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.577815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.577926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.577964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 00:27:55.645 [2024-11-20 12:37:38.578085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.578118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.645 qpair failed and we were unable to recover it. 
00:27:55.645 [2024-11-20 12:37:38.578253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.645 [2024-11-20 12:37:38.578283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.578469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.578502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.578605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.578636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.578752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.578783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.579027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.579061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 
00:27:55.646 [2024-11-20 12:37:38.579237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.579269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.579526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.579558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.579740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.579771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.579892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.579924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.580162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.580195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 
00:27:55.646 [2024-11-20 12:37:38.580368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.580399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.580584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.580616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.580801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.580833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.580972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.581007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.581193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.581225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 
00:27:55.646 [2024-11-20 12:37:38.581409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.581440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.581615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.581645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.581852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.581882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.582067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.582099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.582227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.582259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 
00:27:55.646 [2024-11-20 12:37:38.582448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.582479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.582667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.582704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.582884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.582918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.583119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.583155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.583343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.583373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 
00:27:55.646 [2024-11-20 12:37:38.583483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.583515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.583648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.583679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.583871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.583904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.583897] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:27:55.646 [2024-11-20 12:37:38.583945] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:55.646 [2024-11-20 12:37:38.584046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.584079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 
00:27:55.646 [2024-11-20 12:37:38.584277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.584307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.584550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.584579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.584761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.584790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.585004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.585042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.585229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.585267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 
00:27:55.646 [2024-11-20 12:37:38.585469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.585500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.585703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.585735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.585879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.585910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.646 qpair failed and we were unable to recover it. 00:27:55.646 [2024-11-20 12:37:38.586138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.646 [2024-11-20 12:37:38.586172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.586306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.586337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 
00:27:55.647 [2024-11-20 12:37:38.586509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.586541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.586737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.586769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.586902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.586933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.587139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.587172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.587271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.587304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 
00:27:55.647 [2024-11-20 12:37:38.587452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.587485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.587612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.587648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.587922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.587965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.588127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.588159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.588340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.588371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 
00:27:55.647 [2024-11-20 12:37:38.588546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.588577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.588708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.588739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.588857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.588888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.589078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.589111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.589323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.589354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 
00:27:55.647 [2024-11-20 12:37:38.589490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.589522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.589701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.589732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.589840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.589871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.589992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.590025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.590229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.590260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 
00:27:55.647 [2024-11-20 12:37:38.590381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.590414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.590559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.590590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.590716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.590747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.590849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.590880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.590990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.591023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 
00:27:55.647 [2024-11-20 12:37:38.591206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.591237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.591361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.591394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.591506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.591537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.591665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.591697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 00:27:55.647 [2024-11-20 12:37:38.591875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.647 [2024-11-20 12:37:38.591906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.647 qpair failed and we were unable to recover it. 
00:27:55.647-00:27:55.651 [... the same pair of errors repeats continuously from 12:37:38.592 through 12:37:38.615: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 (later tqpair=0x1d57ba0) with addr=10.0.0.2, port=4420, each ending in "qpair failed and we were unable to recover it." ...]
00:27:55.651 [2024-11-20 12:37:38.616037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-11-20 12:37:38.616071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-11-20 12:37:38.616184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-11-20 12:37:38.616216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-11-20 12:37:38.616461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-11-20 12:37:38.616494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-11-20 12:37:38.616680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-11-20 12:37:38.616713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-11-20 12:37:38.616938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-11-20 12:37:38.616985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 
00:27:55.651 [2024-11-20 12:37:38.617185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-11-20 12:37:38.617217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-11-20 12:37:38.617455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-11-20 12:37:38.617487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-11-20 12:37:38.617724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-11-20 12:37:38.617757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-11-20 12:37:38.617942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-11-20 12:37:38.617987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-11-20 12:37:38.618258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-11-20 12:37:38.618292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 
00:27:55.651 [2024-11-20 12:37:38.618481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-11-20 12:37:38.618513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.651 [2024-11-20 12:37:38.618695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.651 [2024-11-20 12:37:38.618728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.651 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.618991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.619026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.619199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.619232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.619356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.619389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 
00:27:55.652 [2024-11-20 12:37:38.619562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.619600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.619781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.619814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.620054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.620088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.620208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.620241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.620502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.620535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 
00:27:55.652 [2024-11-20 12:37:38.620676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.620708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.620827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.620860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.621051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.621085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.621276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.621308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.621555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.621588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 
00:27:55.652 [2024-11-20 12:37:38.621798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.621830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.621945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.621987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.622171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.622204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.622318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.622350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.622568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.622601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 
00:27:55.652 [2024-11-20 12:37:38.622728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.622759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.622891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.622924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.623110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.623142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.623407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.623439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.623680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.623713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 
00:27:55.652 [2024-11-20 12:37:38.623961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.623995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.624183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.624216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.624457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.624488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.624590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.624621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.624749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.624781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 
00:27:55.652 [2024-11-20 12:37:38.624983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.625018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.625201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.625233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.625424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.625462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.625652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.625685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.625864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.625896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 
00:27:55.652 [2024-11-20 12:37:38.626140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.652 [2024-11-20 12:37:38.626172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.652 qpair failed and we were unable to recover it. 00:27:55.652 [2024-11-20 12:37:38.626445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.626477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 00:27:55.653 [2024-11-20 12:37:38.626601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.626632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 00:27:55.653 [2024-11-20 12:37:38.626844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.626876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 00:27:55.653 [2024-11-20 12:37:38.627066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.627101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 
00:27:55.653 [2024-11-20 12:37:38.627371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.627403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 00:27:55.653 [2024-11-20 12:37:38.627521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.627553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 00:27:55.653 [2024-11-20 12:37:38.627742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.627774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 00:27:55.653 [2024-11-20 12:37:38.627959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.627992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 00:27:55.653 [2024-11-20 12:37:38.628235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.628267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 
00:27:55.653 [2024-11-20 12:37:38.628383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.628415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 00:27:55.653 [2024-11-20 12:37:38.628610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.628644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 00:27:55.653 [2024-11-20 12:37:38.628832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.628864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 00:27:55.653 [2024-11-20 12:37:38.629058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.629093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 00:27:55.653 [2024-11-20 12:37:38.629210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.629241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 
00:27:55.653 [2024-11-20 12:37:38.629413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.629444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 00:27:55.653 [2024-11-20 12:37:38.629564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.629596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 00:27:55.653 [2024-11-20 12:37:38.629782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.629814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 00:27:55.653 [2024-11-20 12:37:38.630068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.630102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 00:27:55.653 [2024-11-20 12:37:38.630245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.630277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 
00:27:55.653 [2024-11-20 12:37:38.630452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.630483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 00:27:55.653 [2024-11-20 12:37:38.630597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.630628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 00:27:55.653 [2024-11-20 12:37:38.630748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.630780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 00:27:55.653 [2024-11-20 12:37:38.630975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.631010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 00:27:55.653 [2024-11-20 12:37:38.631132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.653 [2024-11-20 12:37:38.631169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.653 qpair failed and we were unable to recover it. 
00:27:55.653 [2024-11-20 12:37:38.631290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.631320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.631450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.631483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.631590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.631621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.631882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.631914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.632075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.632127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 
00:27:55.654 [2024-11-20 12:37:38.632349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.632384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.632651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.632683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.632809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.632840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.633047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.633081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.633218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.633250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 
00:27:55.654 [2024-11-20 12:37:38.633425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.633457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.633632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.633665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.633841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.633873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.634069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.634103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.634278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.634310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 
00:27:55.654 [2024-11-20 12:37:38.634573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.634604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.634820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.634852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.635047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.635080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.635261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.635292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.635567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.635599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 
00:27:55.654 [2024-11-20 12:37:38.635840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.635872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.636025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.636058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.636239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.636271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.636459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.636492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.636622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.636654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 
00:27:55.654 [2024-11-20 12:37:38.636898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.636930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.637077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.637115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.637298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.637330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.637434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.637466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.637583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.637615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 
00:27:55.654 [2024-11-20 12:37:38.637820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.637853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.637975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.638010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.654 [2024-11-20 12:37:38.638251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.654 [2024-11-20 12:37:38.638284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.654 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.638484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.638515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.638644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.638676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 
00:27:55.655 [2024-11-20 12:37:38.638782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.638814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.638930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.638969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.639106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.639139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.639247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.639278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.639408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.639439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 
00:27:55.655 [2024-11-20 12:37:38.639678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.639711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.639825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.639856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.640050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.640083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.640265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.640298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.640520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.640552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 
00:27:55.655 [2024-11-20 12:37:38.640725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.640758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.640889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.640922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.641172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.641205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.641443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.641475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.641666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.641698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 
00:27:55.655 [2024-11-20 12:37:38.641971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.642004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.642178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.642210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.642323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.642355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.642543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.642576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.642678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.642709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 
00:27:55.655 [2024-11-20 12:37:38.642902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.642934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.643158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.643191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.643324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.643356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.643538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.643569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.643752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.643784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 
00:27:55.655 [2024-11-20 12:37:38.643967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.644000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.644126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.644157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.644292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.644324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.644517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.644548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.644784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.644815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 
00:27:55.655 [2024-11-20 12:37:38.644946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.644996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.645108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.645146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.645263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.645294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.645475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.645507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.655 [2024-11-20 12:37:38.645747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.645777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 
00:27:55.655 [2024-11-20 12:37:38.645893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.655 [2024-11-20 12:37:38.645923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.655 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.646060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.646093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.646356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.646388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.646520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.646551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.646729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.646760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 
00:27:55.656 [2024-11-20 12:37:38.646940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.646981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.647195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.647226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.647425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.647457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.647647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.647680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.647994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.648026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 
00:27:55.656 [2024-11-20 12:37:38.648166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.648200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.648441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.648472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.648660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.648692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.648823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.648853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.648978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.649012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 
00:27:55.656 [2024-11-20 12:37:38.649116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.649148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.649340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.649370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.649607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.649639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.649745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.649777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.649965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.649997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 
00:27:55.656 [2024-11-20 12:37:38.650186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.650217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.650401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.650433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.650602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.650634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.650792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.650823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.650985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.651018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 
00:27:55.656 [2024-11-20 12:37:38.651126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.651158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.651264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.651295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.651488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.651518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.651705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.651737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.651873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.651904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 
00:27:55.656 [2024-11-20 12:37:38.652082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.652114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.652305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.652335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.652573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.652603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.652718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.652748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 00:27:55.656 [2024-11-20 12:37:38.652942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.656 [2024-11-20 12:37:38.653004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.656 qpair failed and we were unable to recover it. 
00:27:55.935 [2024-11-20 12:37:38.667591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:55.936 [2024-11-20 12:37:38.676047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.936 [2024-11-20 12:37:38.676080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.936 qpair failed and we were unable to recover it. 00:27:55.936 [2024-11-20 12:37:38.676276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.936 [2024-11-20 12:37:38.676309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.936 qpair failed and we were unable to recover it. 00:27:55.936 [2024-11-20 12:37:38.676549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.936 [2024-11-20 12:37:38.676581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.936 qpair failed and we were unable to recover it. 00:27:55.936 [2024-11-20 12:37:38.676690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.936 [2024-11-20 12:37:38.676722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.936 qpair failed and we were unable to recover it. 00:27:55.936 [2024-11-20 12:37:38.676829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.936 [2024-11-20 12:37:38.676861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.936 qpair failed and we were unable to recover it. 
00:27:55.936 [2024-11-20 12:37:38.677002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.677036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.677156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.677187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.677447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.677480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.677621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.677654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.677915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.677955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 
00:27:55.937 [2024-11-20 12:37:38.678221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.678254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.678503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.678573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.678938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.679005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.679140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.679174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.679301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.679333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 
00:27:55.937 [2024-11-20 12:37:38.679530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.679562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.679743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.679775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.679958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.679991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.680183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.680215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.680396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.680428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 
00:27:55.937 [2024-11-20 12:37:38.680624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.680655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.680896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.680928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.681078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.681111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.681285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.681318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.681587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.681628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 
00:27:55.937 [2024-11-20 12:37:38.681816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.681848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.681982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.682016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.682130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.682162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.682349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.682381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.682552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.682583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 
00:27:55.937 [2024-11-20 12:37:38.682699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.682731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.682904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.682935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.683151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.683183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.683292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.683322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.683423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.683455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 
00:27:55.937 [2024-11-20 12:37:38.683649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.683680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.683854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.683886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.684126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.684159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.684360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.684392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.684522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.684554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 
00:27:55.937 [2024-11-20 12:37:38.684725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.684756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.684892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.684924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.937 qpair failed and we were unable to recover it. 00:27:55.937 [2024-11-20 12:37:38.685112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.937 [2024-11-20 12:37:38.685144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.685259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.685290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.685468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.685499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 
00:27:55.938 [2024-11-20 12:37:38.685631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.685661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.685901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.685932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.686072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.686104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.686280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.686311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.686511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.686543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 
00:27:55.938 [2024-11-20 12:37:38.686670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.686701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.686916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.686971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.687156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.687189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.687312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.687343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.687588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.687621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 
00:27:55.938 [2024-11-20 12:37:38.687739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.687769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.687902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.687932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.688187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.688220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.688424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.688457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.688576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.688608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 
00:27:55.938 [2024-11-20 12:37:38.688731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.688763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.688944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.688992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.689178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.689210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.689340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.689371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.689611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.689643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 
00:27:55.938 [2024-11-20 12:37:38.689902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.689935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.690066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.690098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.690333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.690365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.690626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.690658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.690849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.690880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 
00:27:55.938 [2024-11-20 12:37:38.691085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.691117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.691341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.691373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.691545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.691578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.691698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.691732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.691852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.691883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 
00:27:55.938 [2024-11-20 12:37:38.692012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.692046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.692176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.938 [2024-11-20 12:37:38.692206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.938 qpair failed and we were unable to recover it. 00:27:55.938 [2024-11-20 12:37:38.692331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.692363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.692479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.692516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.692755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.692788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 
00:27:55.939 [2024-11-20 12:37:38.692981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.693013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.693206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.693238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.693443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.693477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.693671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.693703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.693895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.693926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 
00:27:55.939 [2024-11-20 12:37:38.694050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.694085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.694207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.694239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.694357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.694388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.694575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.694608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.694724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.694755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 
00:27:55.939 [2024-11-20 12:37:38.694993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.695027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.695288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.695320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.695461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.695493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.695670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.695710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.695975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.696010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 
00:27:55.939 [2024-11-20 12:37:38.696269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.696302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.696610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.696643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.696889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.696922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.697132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.697165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.697363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.697395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 
00:27:55.939 [2024-11-20 12:37:38.697653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.697686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.697868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.697900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.698097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.698130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.698325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.698356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.698483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.698515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 
00:27:55.939 [2024-11-20 12:37:38.698703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.698745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.698921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.698960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.699141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.939 [2024-11-20 12:37:38.699173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.939 qpair failed and we were unable to recover it. 00:27:55.939 [2024-11-20 12:37:38.699298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.699331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.699504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.699536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 
00:27:55.940 [2024-11-20 12:37:38.699667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.699697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.699813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.699845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.699966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.700001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.700134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.700165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.700404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.700435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 
00:27:55.940 [2024-11-20 12:37:38.700617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.700649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.700769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.700801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.701068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.701101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.701283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.701315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.701536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.701569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 
00:27:55.940 [2024-11-20 12:37:38.701835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.701866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.702124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.702157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.702422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.702454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.702628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.702661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.702786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.702817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 
00:27:55.940 [2024-11-20 12:37:38.703021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.703054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.703223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.703256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.703375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.703407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.703662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.703694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.703866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.703899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 
00:27:55.940 [2024-11-20 12:37:38.704095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.704128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.704259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.704291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.704462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.704500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.704712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.704744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.705013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.705046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 
00:27:55.940 [2024-11-20 12:37:38.705308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.705341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.705566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.705599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.705795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.705827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.706072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.706104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.940 [2024-11-20 12:37:38.706295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.706327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 
00:27:55.940 [2024-11-20 12:37:38.706498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.940 [2024-11-20 12:37:38.706529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.940 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.706766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.706798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.706993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.707026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.707257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.707288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.707425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.707457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 
00:27:55.941 [2024-11-20 12:37:38.707650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.707683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.707910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.707944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.708079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.708111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.708301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.708334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.708525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.708557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 
00:27:55.941 [2024-11-20 12:37:38.708660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.708692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.708828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.708862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.708984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.709018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.709262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.709293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.709567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.709601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 
00:27:55.941 [2024-11-20 12:37:38.709730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.709763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.710012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.710046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.710268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.710300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.710421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.710452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 [2024-11-20 12:37:38.710462] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.710488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:55.941 [2024-11-20 12:37:38.710499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:55.941 [2024-11-20 12:37:38.710506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:55.941 [2024-11-20 12:37:38.710512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:55.941 [2024-11-20 12:37:38.710646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.710676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.710959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.710994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.711206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.711238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.711463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.711497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.711676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.711707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 
00:27:55.941 [2024-11-20 12:37:38.711875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.711907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.712107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.712143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.712109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:55.941 [2024-11-20 12:37:38.712260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.712201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:55.941 [2024-11-20 12:37:38.712291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 00:27:55.941 [2024-11-20 12:37:38.712318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:55.941 [2024-11-20 12:37:38.712319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:55.941 [2024-11-20 12:37:38.712531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.941 [2024-11-20 12:37:38.712562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.941 qpair failed and we were unable to recover it. 
00:27:55.941 [2024-11-20 12:37:38.712676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-11-20 12:37:38.712707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-11-20 12:37:38.712972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-11-20 12:37:38.713011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-11-20 12:37:38.713302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-11-20 12:37:38.713335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-11-20 12:37:38.713506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-11-20 12:37:38.713538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-11-20 12:37:38.713726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-11-20 12:37:38.713758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 
00:27:55.942 [2024-11-20 12:37:38.713946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-11-20 12:37:38.713992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-11-20 12:37:38.714167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-11-20 12:37:38.714200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-11-20 12:37:38.714326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-11-20 12:37:38.714358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-11-20 12:37:38.714545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-11-20 12:37:38.714576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 00:27:55.942 [2024-11-20 12:37:38.714760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.942 [2024-11-20 12:37:38.714793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.942 qpair failed and we were unable to recover it. 
00:27:55.942 [2024-11-20 12:37:38.714985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.715018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.942 [2024-11-20 12:37:38.715152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.715183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.942 [2024-11-20 12:37:38.715429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.715462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.942 [2024-11-20 12:37:38.715578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.715609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.942 [2024-11-20 12:37:38.715811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.715842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.942 [2024-11-20 12:37:38.716031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.716069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.942 [2024-11-20 12:37:38.716257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.716288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.942 [2024-11-20 12:37:38.716533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.716565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.942 [2024-11-20 12:37:38.716739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.716771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.942 [2024-11-20 12:37:38.716981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.717016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.942 [2024-11-20 12:37:38.717160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.717191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.942 [2024-11-20 12:37:38.717367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.717400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.942 [2024-11-20 12:37:38.717524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.717556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.942 [2024-11-20 12:37:38.717771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.717804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.942 [2024-11-20 12:37:38.717940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.717982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.942 [2024-11-20 12:37:38.718192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.718224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.942 [2024-11-20 12:37:38.718347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.718379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.942 [2024-11-20 12:37:38.718664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.718697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.942 [2024-11-20 12:37:38.718890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.718922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.942 [2024-11-20 12:37:38.719130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.942 [2024-11-20 12:37:38.719164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.942 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.719408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.719440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.719563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.719596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.719722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.719754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.719937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.719979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.720241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.720273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.720408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.720439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.720558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.720590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.720827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.720858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.720981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.721016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.721161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.721194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.721409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.721441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.721556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.721589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.721783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.721814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.722014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.722047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.722255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.722287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.722491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.722524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.722761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.722793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.723034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.723067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.723280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.723314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.723445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.723476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.723598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.723630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.723738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.723771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.723960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.723994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.724136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.724169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.724340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.724373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.724483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.724516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.724727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.724807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.725028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.725077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.725288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.943 [2024-11-20 12:37:38.725320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.943 qpair failed and we were unable to recover it.
00:27:55.943 [2024-11-20 12:37:38.725447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.725478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.725742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.725774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.725992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.726026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.726216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.726249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.726366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.726397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.726662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.726694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.726819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.726851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.726979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.727014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.727308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.727340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.727471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.727502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.727739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.727780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.728037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.728073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.728199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.728230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.728420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.728451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.728630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.728661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.728885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.728917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.729123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.729163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.729356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.729389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.729527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.729559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.729741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.729772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.729984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.730016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.730199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.730230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.730355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.730384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.730576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.730609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.730882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.730915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.731137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.731175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.731304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.731336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.731559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.731590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.731845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.731877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.732057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.944 [2024-11-20 12:37:38.732091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.944 qpair failed and we were unable to recover it.
00:27:55.944 [2024-11-20 12:37:38.732276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.945 [2024-11-20 12:37:38.732307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.945 qpair failed and we were unable to recover it.
00:27:55.945 [2024-11-20 12:37:38.732517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.945 [2024-11-20 12:37:38.732550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.945 qpair failed and we were unable to recover it.
00:27:55.945 [2024-11-20 12:37:38.732657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.945 [2024-11-20 12:37:38.732689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.945 qpair failed and we were unable to recover it.
00:27:55.945 [2024-11-20 12:37:38.732963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.945 [2024-11-20 12:37:38.732996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.945 qpair failed and we were unable to recover it.
00:27:55.945 [2024-11-20 12:37:38.733189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.945 [2024-11-20 12:37:38.733220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.945 qpair failed and we were unable to recover it.
00:27:55.945 [2024-11-20 12:37:38.733394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.945 [2024-11-20 12:37:38.733426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.945 qpair failed and we were unable to recover it.
00:27:55.945 [2024-11-20 12:37:38.733615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.945 [2024-11-20 12:37:38.733647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff8a0000b90 with addr=10.0.0.2, port=4420
00:27:55.945 qpair failed and we were unable to recover it.
00:27:55.945 [2024-11-20 12:37:38.733771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.945 [2024-11-20 12:37:38.733807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.945 qpair failed and we were unable to recover it.
00:27:55.945 [2024-11-20 12:37:38.733983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.945 [2024-11-20 12:37:38.734017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.945 qpair failed and we were unable to recover it.
00:27:55.945 [2024-11-20 12:37:38.734229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.945 [2024-11-20 12:37:38.734262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.945 qpair failed and we were unable to recover it.
00:27:55.945 [2024-11-20 12:37:38.734482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.945 [2024-11-20 12:37:38.734514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.945 qpair failed and we were unable to recover it.
00:27:55.945 [2024-11-20 12:37:38.734630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.945 [2024-11-20 12:37:38.734662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.945 qpair failed and we were unable to recover it.
00:27:55.945 [2024-11-20 12:37:38.734845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.945 [2024-11-20 12:37:38.734878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.945 qpair failed and we were unable to recover it.
00:27:55.945 [2024-11-20 12:37:38.735055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.945 [2024-11-20 12:37:38.735087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.945 qpair failed and we were unable to recover it.
00:27:55.945 [2024-11-20 12:37:38.735259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.945 [2024-11-20 12:37:38.735291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.945 qpair failed and we were unable to recover it.
00:27:55.945 [2024-11-20 12:37:38.735516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.945 [2024-11-20 12:37:38.735548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.945 qpair failed and we were unable to recover it.
00:27:55.945 [2024-11-20 12:37:38.735676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.945 [2024-11-20 12:37:38.735709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.945 qpair failed and we were unable to recover it. 00:27:55.945 [2024-11-20 12:37:38.735833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.945 [2024-11-20 12:37:38.735865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.945 qpair failed and we were unable to recover it. 00:27:55.945 [2024-11-20 12:37:38.735972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.945 [2024-11-20 12:37:38.736005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.945 qpair failed and we were unable to recover it. 00:27:55.945 [2024-11-20 12:37:38.736106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.945 [2024-11-20 12:37:38.736138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.945 qpair failed and we were unable to recover it. 00:27:55.945 [2024-11-20 12:37:38.736314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.945 [2024-11-20 12:37:38.736345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.945 qpair failed and we were unable to recover it. 
00:27:55.945 [2024-11-20 12:37:38.736541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.945 [2024-11-20 12:37:38.736573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.945 qpair failed and we were unable to recover it. 00:27:55.945 [2024-11-20 12:37:38.736745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.945 [2024-11-20 12:37:38.736776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.945 qpair failed and we were unable to recover it. 00:27:55.945 [2024-11-20 12:37:38.736926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.945 [2024-11-20 12:37:38.736967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.945 qpair failed and we were unable to recover it. 00:27:55.945 [2024-11-20 12:37:38.737083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.945 [2024-11-20 12:37:38.737114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.945 qpair failed and we were unable to recover it. 00:27:55.945 [2024-11-20 12:37:38.737236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.945 [2024-11-20 12:37:38.737267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.945 qpair failed and we were unable to recover it. 
00:27:55.946 [2024-11-20 12:37:38.737535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.737567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.737706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.737738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.737906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.737938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.738139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.738173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.738351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.738382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 
00:27:55.946 [2024-11-20 12:37:38.738571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.738605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.738875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.738910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.739095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.739130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.739275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.739310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.739617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.739652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 
00:27:55.946 [2024-11-20 12:37:38.739892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.739924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.740126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.740159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.740289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.740320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.740608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.740641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.740772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.740804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 
00:27:55.946 [2024-11-20 12:37:38.740995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.741030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.741208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.741240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.741479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.741510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.741752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.741783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.742040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.742074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 
00:27:55.946 [2024-11-20 12:37:38.742267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.742301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.742492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.742531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.742795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.742831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.742979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.743014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.743144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.743176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 
00:27:55.946 [2024-11-20 12:37:38.743407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.743443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.743636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.946 [2024-11-20 12:37:38.743668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.946 qpair failed and we were unable to recover it. 00:27:55.946 [2024-11-20 12:37:38.743843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.743874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.744040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.744073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.744203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.744236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 
00:27:55.947 [2024-11-20 12:37:38.744356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.744387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.744666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.744698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.744945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.745000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.745141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.745172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.745298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.745331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 
00:27:55.947 [2024-11-20 12:37:38.745622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.745655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.745770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.745801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.745975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.746009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.746263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.746294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.746421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.746453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 
00:27:55.947 [2024-11-20 12:37:38.746578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.746610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.746794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.746826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.747089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.747121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.747257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.747289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.747526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.747557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 
00:27:55.947 [2024-11-20 12:37:38.747667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.747698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.747825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.747857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.748053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.748086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.748269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.748301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.748414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.748446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 
00:27:55.947 [2024-11-20 12:37:38.748686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.748717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.748901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.748933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.749112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.749144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.749258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.749290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.749395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.749426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 
00:27:55.947 [2024-11-20 12:37:38.749571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.749603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.749843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.749873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.750110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.750144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.947 [2024-11-20 12:37:38.750335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.947 [2024-11-20 12:37:38.750366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.947 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.750628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.750661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 
00:27:55.948 [2024-11-20 12:37:38.750897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.750929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.751063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.751101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.751282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.751313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.751436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.751468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.751674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.751706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 
00:27:55.948 [2024-11-20 12:37:38.751840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.751873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.752165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.752199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.752319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.752351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.752573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.752605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.752817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.752851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 
00:27:55.948 [2024-11-20 12:37:38.752963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.752996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.753111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.753143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.753330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.753363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.753640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.753672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.753854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.753886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 
00:27:55.948 [2024-11-20 12:37:38.754091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.754124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.754255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.754287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.754525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.754557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.754746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.754778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.754972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.755005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 
00:27:55.948 [2024-11-20 12:37:38.755189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.755221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.755458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.755488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.755684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.755715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.755840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.755871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.756048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.756081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 
00:27:55.948 [2024-11-20 12:37:38.756347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.756380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.948 qpair failed and we were unable to recover it. 00:27:55.948 [2024-11-20 12:37:38.756640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.948 [2024-11-20 12:37:38.756673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.756804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.756835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.757025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.757059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.757180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.757212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 
00:27:55.949 [2024-11-20 12:37:38.757382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.757414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.757605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.757637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.757875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.757908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.758052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.758085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.758272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.758303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 
00:27:55.949 [2024-11-20 12:37:38.758574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.758605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.758783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.758814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.758957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.758992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.759247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.759278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.759541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.759573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 
00:27:55.949 [2024-11-20 12:37:38.759750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.759781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.759968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.760009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.760203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.760236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.760342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.760372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.760557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.760588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 
00:27:55.949 [2024-11-20 12:37:38.760829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.760861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.761077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.761110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.761301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.761333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.761502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.761533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.761709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.761741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 
00:27:55.949 [2024-11-20 12:37:38.761905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.761936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.762129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.762162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.762421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.762452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.762631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.762662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.762835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.762867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 
00:27:55.949 [2024-11-20 12:37:38.763058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.763091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.763306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.763338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.949 qpair failed and we were unable to recover it. 00:27:55.949 [2024-11-20 12:37:38.763589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.949 [2024-11-20 12:37:38.763620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.763839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.763870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.763987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.764020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 
00:27:55.950 [2024-11-20 12:37:38.764203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.764234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.764401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.764432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.764600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.764632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.764892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.764925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.765075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.765108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 
00:27:55.950 [2024-11-20 12:37:38.765372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.765404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.765591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.765623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.765750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.765781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.765973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.766007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.766197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.766230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 
00:27:55.950 [2024-11-20 12:37:38.766341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.766372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.766568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.766600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.766841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.766879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.767086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.767120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.767299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.767333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 
00:27:55.950 [2024-11-20 12:37:38.767452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.767483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.767582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.767614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.767797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.767829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.767955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.767989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.768222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.768254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 
00:27:55.950 [2024-11-20 12:37:38.768441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.768473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.768657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.768695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.768819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.768851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.768992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.769025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.769200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.769232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 
00:27:55.950 [2024-11-20 12:37:38.769405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.769436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.769611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.769642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.769750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.769781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.769968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.950 [2024-11-20 12:37:38.770001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.950 qpair failed and we were unable to recover it. 00:27:55.950 [2024-11-20 12:37:38.770241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.770272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 
00:27:55.951 [2024-11-20 12:37:38.770481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.770513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.770694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.770725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.770979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.771011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.771138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.771169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.771286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.771317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 
00:27:55.951 [2024-11-20 12:37:38.771511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.771542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.771733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.771764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.771897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.771928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.772056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.772087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.772259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.772290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 
00:27:55.951 [2024-11-20 12:37:38.772485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.772516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.772740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.772771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.772899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.772929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.773121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.773153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.773393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.773424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 
00:27:55.951 [2024-11-20 12:37:38.773546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.773577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.773682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.773714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.773838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.773868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.774115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.774148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.774321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.774353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 
00:27:55.951 [2024-11-20 12:37:38.774527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.774557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.774755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.774786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.774987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.775020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.775274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.951 [2024-11-20 12:37:38.775305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.951 qpair failed and we were unable to recover it. 00:27:55.951 [2024-11-20 12:37:38.775430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.952 [2024-11-20 12:37:38.775460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.952 qpair failed and we were unable to recover it. 
00:27:55.955 [2024-11-20 12:37:38.799088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.955 [2024-11-20 12:37:38.799119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.955 qpair failed and we were unable to recover it. 00:27:55.955 [2024-11-20 12:37:38.799242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.955 [2024-11-20 12:37:38.799273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.955 qpair failed and we were unable to recover it. 00:27:55.955 [2024-11-20 12:37:38.799470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.955 [2024-11-20 12:37:38.799502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.955 qpair failed and we were unable to recover it. 00:27:55.955 [2024-11-20 12:37:38.799756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.955 [2024-11-20 12:37:38.799822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.955 qpair failed and we were unable to recover it. 00:27:55.955 [2024-11-20 12:37:38.800020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.955 [2024-11-20 12:37:38.800080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.955 qpair failed and we were unable to recover it. 
00:27:55.955 [2024-11-20 12:37:38.800369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.955 [2024-11-20 12:37:38.800405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.955 qpair failed and we were unable to recover it. 00:27:55.955 [2024-11-20 12:37:38.800596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.955 [2024-11-20 12:37:38.800630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.955 qpair failed and we were unable to recover it. 00:27:55.955 [2024-11-20 12:37:38.800891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.955 [2024-11-20 12:37:38.800923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.955 qpair failed and we were unable to recover it. 00:27:55.955 [2024-11-20 12:37:38.801129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.955 [2024-11-20 12:37:38.801163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.955 qpair failed and we were unable to recover it. 00:27:55.955 [2024-11-20 12:37:38.801451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.955 [2024-11-20 12:37:38.801482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.955 qpair failed and we were unable to recover it. 
00:27:55.955 [2024-11-20 12:37:38.801728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.955 [2024-11-20 12:37:38.801759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.955 qpair failed and we were unable to recover it. 00:27:55.955 [2024-11-20 12:37:38.801962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.801996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 00:27:55.956 [2024-11-20 12:37:38.802106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.802138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 00:27:55.956 [2024-11-20 12:37:38.802316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.802348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 00:27:55.956 [2024-11-20 12:37:38.802607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.802638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 
00:27:55.956 [2024-11-20 12:37:38.802810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.802841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 00:27:55.956 [2024-11-20 12:37:38.803020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.803053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 00:27:55.956 [2024-11-20 12:37:38.803242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.803274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 00:27:55.956 [2024-11-20 12:37:38.803500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.803532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 00:27:55.956 [2024-11-20 12:37:38.803767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.803799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 
00:27:55.956 [2024-11-20 12:37:38.804039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.804072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 00:27:55.956 [2024-11-20 12:37:38.804255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.804288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 00:27:55.956 [2024-11-20 12:37:38.804422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.804456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 00:27:55.956 [2024-11-20 12:37:38.804694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.804726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 00:27:55.956 [2024-11-20 12:37:38.804909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.804940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 
00:27:55.956 [2024-11-20 12:37:38.805132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.805166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 00:27:55.956 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.956 [2024-11-20 12:37:38.805278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.805312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 00:27:55.956 [2024-11-20 12:37:38.805484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.805516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 00:27:55.956 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:55.956 [2024-11-20 12:37:38.805687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.805720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 
00:27:55.956 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:55.956 [2024-11-20 12:37:38.805982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.806019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 00:27:55.956 [2024-11-20 12:37:38.806197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.806228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:55.956 qpair failed and we were unable to recover it. 00:27:55.956 [2024-11-20 12:37:38.806357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.806389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 00:27:55.956 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:55.956 [2024-11-20 12:37:38.806583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.956 [2024-11-20 12:37:38.806617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.956 qpair failed and we were unable to recover it. 
00:27:55.956 [2024-11-20 12:37:38.806858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.806889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.807013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.807045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.807289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.807321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.807527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.807558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.807825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.807857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 
00:27:55.957 [2024-11-20 12:37:38.808033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.808067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.808308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.808341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.808579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.808611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.808849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.808881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.809083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.809117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 
00:27:55.957 [2024-11-20 12:37:38.809303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.809334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.809534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.809566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.809819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.809852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.810114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.810147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.810333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.810366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 
00:27:55.957 [2024-11-20 12:37:38.810552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.810585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.810872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.810906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.811039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.811072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.811262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.811293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.811561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.811593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 
00:27:55.957 [2024-11-20 12:37:38.811724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.811755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.812027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.812062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.812201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.812239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.812426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.812458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.812675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.812706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 
00:27:55.957 [2024-11-20 12:37:38.812961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.812995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.813131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.813164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.813424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.813455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.957 [2024-11-20 12:37:38.813572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.957 [2024-11-20 12:37:38.813603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.957 qpair failed and we were unable to recover it. 00:27:55.958 [2024-11-20 12:37:38.813809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-11-20 12:37:38.813841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 
00:27:55.958 [2024-11-20 12:37:38.814073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-11-20 12:37:38.814106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-11-20 12:37:38.814346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-11-20 12:37:38.814378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-11-20 12:37:38.814557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-11-20 12:37:38.814588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-11-20 12:37:38.814856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-11-20 12:37:38.814888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-11-20 12:37:38.815197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-11-20 12:37:38.815230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 
00:27:55.958 [2024-11-20 12:37:38.815418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-11-20 12:37:38.815464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-11-20 12:37:38.815640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-11-20 12:37:38.815672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-11-20 12:37:38.815865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-11-20 12:37:38.815897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-11-20 12:37:38.816161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-11-20 12:37:38.816194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 00:27:55.958 [2024-11-20 12:37:38.816376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-11-20 12:37:38.816409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it. 
00:27:55.958 [2024-11-20 12:37:38.816585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.958 [2024-11-20 12:37:38.816617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.958 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c:1054:posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it." — repeats approximately 115 more times between 12:37:38.816 and 12:37:38.842, for tqpair=0x7ff894000b90, 0x7ff8a0000b90, 0x7ff898000b90, and 0x1d57ba0, all targeting addr=10.0.0.2, port=4420 ...]
00:27:55.961 [2024-11-20 12:37:38.842338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.961 [2024-11-20 12:37:38.842370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.961 qpair failed and we were unable to recover it.
00:27:55.961 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:55.961 [2024-11-20 12:37:38.842613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.961 [2024-11-20 12:37:38.842646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.961 qpair failed and we were unable to recover it.
00:27:55.961 [2024-11-20 12:37:38.842852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.961 [2024-11-20 12:37:38.842884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.961 qpair failed and we were unable to recover it.
00:27:55.962 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:55.962 [2024-11-20 12:37:38.843016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.843048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.843256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:55.962 [2024-11-20 12:37:38.843288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.843483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.843515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:55.962 [2024-11-20 12:37:38.843706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.843737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.844019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.844050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.844319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.844351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.844490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.844522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.844713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.844743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.844981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.845013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.845296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.845327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.845465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.845497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.845793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.845824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.846077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.846110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.846414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.846445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.846719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.846750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.846940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.846987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.847179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.847211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.847398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.847430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.847696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.847726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.847852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.847883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.848106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.848139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.848332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.848363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.848643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.962 [2024-11-20 12:37:38.848675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420
00:27:55.962 qpair failed and we were unable to recover it.
00:27:55.962 [2024-11-20 12:37:38.848893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.848931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.849207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.849240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.849447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.849478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.849763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.849794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.850076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.850109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.850298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.850330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.850570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.850601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.850819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.850850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.851066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.851099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.851301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.851351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.851506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.851537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.851676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.851707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.851969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.852001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.852239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.852278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.852484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.852515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.852778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.852809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.853080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.853111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.853334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.853365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.853620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.853651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.853830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.853861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.854115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.854146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.854327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.854358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.854675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.854706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.855002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.855034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.855224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.855256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.855546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.855577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.855712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.855742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.855863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.855895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.856100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.856130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.856316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.856347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.856480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.856511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.856771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.856801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.857011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.857042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.963 [2024-11-20 12:37:38.857244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.963 [2024-11-20 12:37:38.857274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.963 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.857473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.857504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.857746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.857777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.857969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.857999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.858200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.858230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.858418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.858448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.858719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.858750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.858939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.858992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.859139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.859171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.859315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.859346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.859473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.859504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.859780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.859812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.860002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.860036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.860224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.860256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.860469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.860501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.860768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.860800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.861064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.861098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.861342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.861375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.861568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.861600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.861837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.861869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.862059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.862092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.862370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.862402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.862671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.862704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.862890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.862922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.863098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.863131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.863392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.863424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.863574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.863606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.863778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.863810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.863999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.964 [2024-11-20 12:37:38.864033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-20 12:37:38.864231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.964 [2024-11-20 12:37:38.864264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.964 qpair failed and we were unable to recover it. 00:27:55.964 [2024-11-20 12:37:38.864539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.864571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.864765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.864797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.865081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.865114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.865287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.865320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 
00:27:55.965 [2024-11-20 12:37:38.865535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.865572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.865835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.865866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.866115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.866148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.866250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.866281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.866564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.866596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 
00:27:55.965 [2024-11-20 12:37:38.866793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.866825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.867064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.867097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.867289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.867321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.867508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.867540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.867777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.867808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 
00:27:55.965 [2024-11-20 12:37:38.868052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.868085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.868321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.868354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.868639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.868671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.868942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.868982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.869177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.869210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 
00:27:55.965 [2024-11-20 12:37:38.869350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.869381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.869638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.869670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.869966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.870000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.870120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.870151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.870338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.870369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 
00:27:55.965 [2024-11-20 12:37:38.870507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.870539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.870782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.870813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.871120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.871153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.871363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.871395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.871646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.871677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 
00:27:55.965 [2024-11-20 12:37:38.871937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.871979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.872105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.965 [2024-11-20 12:37:38.872136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.965 qpair failed and we were unable to recover it. 00:27:55.965 [2024-11-20 12:37:38.872373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.872410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.872657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.872688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.872945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.872992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 
00:27:55.966 [2024-11-20 12:37:38.873181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.873212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.873448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.873479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.873687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.873718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.873890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.873921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.874123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.874156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 
00:27:55.966 [2024-11-20 12:37:38.874395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.874427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.874605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.874637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.874895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.874927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.875122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.875154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.875415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.875447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 
00:27:55.966 [2024-11-20 12:37:38.875730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.875762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.875901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.875933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.876209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.876241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.876373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.876404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.876662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.876693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 
00:27:55.966 [2024-11-20 12:37:38.876872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.876904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.877204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.877238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.877423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.877456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.877712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.877744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.877985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.878020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 
00:27:55.966 [2024-11-20 12:37:38.878287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.878321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.878494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.878527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.878790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.878822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.879080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.879115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.879338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.879376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 
00:27:55.966 [2024-11-20 12:37:38.879619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.966 [2024-11-20 12:37:38.879652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.966 qpair failed and we were unable to recover it. 00:27:55.966 [2024-11-20 12:37:38.879825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-11-20 12:37:38.879858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-11-20 12:37:38.880154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-11-20 12:37:38.880188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-11-20 12:37:38.880408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-11-20 12:37:38.880440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-11-20 12:37:38.880700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-11-20 12:37:38.880732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 
00:27:55.967 [2024-11-20 12:37:38.880912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-11-20 12:37:38.880944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-11-20 12:37:38.881222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-11-20 12:37:38.881256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-11-20 12:37:38.881429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-11-20 12:37:38.881461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-11-20 12:37:38.881743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-11-20 12:37:38.881775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-11-20 12:37:38.882037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-11-20 12:37:38.882072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 
00:27:55.967 [2024-11-20 12:37:38.882249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-11-20 12:37:38.882281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-11-20 12:37:38.882485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-11-20 12:37:38.882518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-11-20 12:37:38.882781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-11-20 12:37:38.882813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-11-20 12:37:38.883098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-11-20 12:37:38.883132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 00:27:55.967 [2024-11-20 12:37:38.883349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.967 [2024-11-20 12:37:38.883381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.967 qpair failed and we were unable to recover it. 
00:27:55.967 Malloc0 00:27:55.967 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.967 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:55.967 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.967 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:27:55.968 [2024-11-20 12:37:38.889783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.889814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.968 qpair failed and we were unable to recover it. 00:27:55.968 [2024-11-20 12:37:38.890056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.890090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.968 qpair failed and we were unable to recover it. 00:27:55.968 [2024-11-20 12:37:38.890353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.890385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.968 qpair failed and we were unable to recover it. 00:27:55.968 [2024-11-20 12:37:38.890569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.890601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.968 qpair failed and we were unable to recover it. 00:27:55.968 [2024-11-20 12:37:38.890796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.890828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.968 qpair failed and we were unable to recover it. 
00:27:55.968 [2024-11-20 12:37:38.891094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.891127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.968 qpair failed and we were unable to recover it. 00:27:55.968 [2024-11-20 12:37:38.891345] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.968 [2024-11-20 12:37:38.891367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.891398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.968 qpair failed and we were unable to recover it. 00:27:55.968 [2024-11-20 12:37:38.891581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.891613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.968 qpair failed and we were unable to recover it. 00:27:55.968 [2024-11-20 12:37:38.891858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.891891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.968 qpair failed and we were unable to recover it. 00:27:55.968 [2024-11-20 12:37:38.892145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.892177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.968 qpair failed and we were unable to recover it. 
00:27:55.968 [2024-11-20 12:37:38.892415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.892447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.968 qpair failed and we were unable to recover it. 00:27:55.968 [2024-11-20 12:37:38.892726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.892758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.968 qpair failed and we were unable to recover it. 00:27:55.968 [2024-11-20 12:37:38.893026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.893059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.968 qpair failed and we were unable to recover it. 00:27:55.968 [2024-11-20 12:37:38.893239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.893272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.968 qpair failed and we were unable to recover it. 00:27:55.968 [2024-11-20 12:37:38.893457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.893489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.968 qpair failed and we were unable to recover it. 
00:27:55.968 [2024-11-20 12:37:38.893752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.893783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.968 qpair failed and we were unable to recover it. 00:27:55.968 [2024-11-20 12:37:38.893991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.894024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.968 qpair failed and we were unable to recover it. 00:27:55.968 [2024-11-20 12:37:38.894161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.894193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.968 qpair failed and we were unable to recover it. 00:27:55.968 [2024-11-20 12:37:38.894367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.894398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.968 qpair failed and we were unable to recover it. 00:27:55.968 [2024-11-20 12:37:38.894672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.968 [2024-11-20 12:37:38.894704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 
00:27:55.969 [2024-11-20 12:37:38.894918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.894958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.895183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.895215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.895451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.895482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.895651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.895683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.895893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.895928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 
00:27:55.969 [2024-11-20 12:37:38.896126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.896160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.896508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.896581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff894000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.896912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.896977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.897256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.897289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.897550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.897581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 
00:27:55.969 [2024-11-20 12:37:38.897772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.897803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.898070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.898102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.898310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.898341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.898523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.898553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.898743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.898773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 
00:27:55.969 [2024-11-20 12:37:38.898966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.899000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.899258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.899288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.969 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:55.969 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.969 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:55.969 [2024-11-20 12:37:38.901190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.901239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 
00:27:55.969 [2024-11-20 12:37:38.901541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.901576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.901845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.901878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.902168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.902200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.902477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.902509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.902700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.902731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 
00:27:55.969 [2024-11-20 12:37:38.902980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.903012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.903195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.903226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.903487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.903518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.903644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.903675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.903931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.903972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 
00:27:55.969 [2024-11-20 12:37:38.904141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.904173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.904375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.969 [2024-11-20 12:37:38.904406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.969 qpair failed and we were unable to recover it. 00:27:55.969 [2024-11-20 12:37:38.904591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.904621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.904896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.904928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.905228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.905260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 
00:27:55.970 [2024-11-20 12:37:38.905441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.905472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.905756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.905787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.906046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.906080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.906380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.906411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.906669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.906703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 
00:27:55.970 [2024-11-20 12:37:38.906965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.906998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.907263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.907294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.907584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.907616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.907888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.907920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.970 [2024-11-20 12:37:38.908207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.908244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 
00:27:55.970 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:55.970 [2024-11-20 12:37:38.908512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.908551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.908740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.908773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.970 [2024-11-20 12:37:38.908964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.908997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:55.970 [2024-11-20 12:37:38.909240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.909275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 
00:27:55.970 [2024-11-20 12:37:38.909560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.909594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.909860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.909893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.910210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.910243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.910465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.910496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.910680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.910711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 
00:27:55.970 [2024-11-20 12:37:38.910897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.910930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.911130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.911163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.911436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.911467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.911642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.911674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.911958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.911994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 
00:27:55.970 [2024-11-20 12:37:38.912184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.912216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.912461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.912493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.970 [2024-11-20 12:37:38.912700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.970 [2024-11-20 12:37:38.912732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.970 qpair failed and we were unable to recover it. 00:27:55.971 [2024-11-20 12:37:38.912999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.913032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 [2024-11-20 12:37:38.913231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.913264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 
00:27:55.971 [2024-11-20 12:37:38.913443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.913474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 [2024-11-20 12:37:38.913712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.913743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 [2024-11-20 12:37:38.914045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.914078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 [2024-11-20 12:37:38.914279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.914310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 [2024-11-20 12:37:38.914543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.914576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 
00:27:55.971 [2024-11-20 12:37:38.914765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.914797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 [2024-11-20 12:37:38.915062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.915095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 [2024-11-20 12:37:38.915382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.915414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 [2024-11-20 12:37:38.915692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.915725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 [2024-11-20 12:37:38.915911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.915943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 
00:27:55.971 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.971 [2024-11-20 12:37:38.916190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.916223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 [2024-11-20 12:37:38.916419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.916452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:55.971 [2024-11-20 12:37:38.916635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.916667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.971 [2024-11-20 12:37:38.916923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.916965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 
00:27:55.971 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:55.971 [2024-11-20 12:37:38.917178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.917212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 [2024-11-20 12:37:38.917341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.917372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 [2024-11-20 12:37:38.917556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.917587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 [2024-11-20 12:37:38.917853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.917885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 [2024-11-20 12:37:38.918088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.918121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d57ba0 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 
00:27:55.971 [2024-11-20 12:37:38.918415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.918450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 [2024-11-20 12:37:38.918691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.918723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 [2024-11-20 12:37:38.918982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.919015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 [2024-11-20 12:37:38.919254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.919286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 00:27:55.971 [2024-11-20 12:37:38.919499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.971 [2024-11-20 12:37:38.919531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff898000b90 with addr=10.0.0.2, port=4420 00:27:55.971 qpair failed and we were unable to recover it. 
00:27:55.971 [2024-11-20 12:37:38.919556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.971 [2024-11-20 12:37:38.922025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.971 [2024-11-20 12:37:38.922148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.972 [2024-11-20 12:37:38.922190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.972 [2024-11-20 12:37:38.922212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.972 [2024-11-20 12:37:38.922232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:55.972 [2024-11-20 12:37:38.922285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.972 qpair failed and we were unable to recover it. 
00:27:55.972 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.972 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:55.972 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.972 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:55.972 [2024-11-20 12:37:38.931971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.972 [2024-11-20 12:37:38.932082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.972 [2024-11-20 12:37:38.932124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.972 [2024-11-20 12:37:38.932148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.972 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.972 [2024-11-20 12:37:38.932169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:55.972 [2024-11-20 12:37:38.932220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.972 qpair failed and we were unable to recover it. 
00:27:55.972 12:37:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 602267 00:27:55.972 [2024-11-20 12:37:38.941959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.972 [2024-11-20 12:37:38.942033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.972 [2024-11-20 12:37:38.942061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.972 [2024-11-20 12:37:38.942075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.972 [2024-11-20 12:37:38.942088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:55.972 [2024-11-20 12:37:38.942119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.972 qpair failed and we were unable to recover it. 
00:27:55.972 [2024-11-20 12:37:38.951938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.972 [2024-11-20 12:37:38.952012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.972 [2024-11-20 12:37:38.952031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.972 [2024-11-20 12:37:38.952041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.972 [2024-11-20 12:37:38.952051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:55.972 [2024-11-20 12:37:38.952073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.972 qpair failed and we were unable to recover it. 
00:27:55.972 [2024-11-20 12:37:38.961925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.972 [2024-11-20 12:37:38.961988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.972 [2024-11-20 12:37:38.962003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.972 [2024-11-20 12:37:38.962011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.972 [2024-11-20 12:37:38.962018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:55.972 [2024-11-20 12:37:38.962033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.972 qpair failed and we were unable to recover it. 
00:27:55.972 [2024-11-20 12:37:38.971966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.972 [2024-11-20 12:37:38.972072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.972 [2024-11-20 12:37:38.972086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.972 [2024-11-20 12:37:38.972093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.972 [2024-11-20 12:37:38.972100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:55.972 [2024-11-20 12:37:38.972115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.972 qpair failed and we were unable to recover it. 
00:27:55.972 [2024-11-20 12:37:38.981887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.972 [2024-11-20 12:37:38.981944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.972 [2024-11-20 12:37:38.981962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.972 [2024-11-20 12:37:38.981969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.972 [2024-11-20 12:37:38.981976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:55.972 [2024-11-20 12:37:38.981992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.972 qpair failed and we were unable to recover it. 
00:27:55.972 [2024-11-20 12:37:38.991986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.972 [2024-11-20 12:37:38.992045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.972 [2024-11-20 12:37:38.992060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.972 [2024-11-20 12:37:38.992067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.972 [2024-11-20 12:37:38.992074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:55.972 [2024-11-20 12:37:38.992089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.972 qpair failed and we were unable to recover it. 
00:27:55.972 [2024-11-20 12:37:39.002019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.972 [2024-11-20 12:37:39.002077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.972 [2024-11-20 12:37:39.002091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.972 [2024-11-20 12:37:39.002098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.972 [2024-11-20 12:37:39.002105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:55.972 [2024-11-20 12:37:39.002120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.972 qpair failed and we were unable to recover it. 
00:27:55.972 [2024-11-20 12:37:39.012076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.972 [2024-11-20 12:37:39.012132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.972 [2024-11-20 12:37:39.012146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.972 [2024-11-20 12:37:39.012153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.972 [2024-11-20 12:37:39.012159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:55.973 [2024-11-20 12:37:39.012174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.973 qpair failed and we were unable to recover it. 
00:27:55.973 [2024-11-20 12:37:39.022157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.973 [2024-11-20 12:37:39.022221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.973 [2024-11-20 12:37:39.022238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.973 [2024-11-20 12:37:39.022245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.973 [2024-11-20 12:37:39.022251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:55.973 [2024-11-20 12:37:39.022266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:55.973 qpair failed and we were unable to recover it. 
00:27:56.234 [2024-11-20 12:37:39.032108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.234 [2024-11-20 12:37:39.032170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.234 [2024-11-20 12:37:39.032183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.234 [2024-11-20 12:37:39.032191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.234 [2024-11-20 12:37:39.032197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.234 [2024-11-20 12:37:39.032212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.234 qpair failed and we were unable to recover it. 
00:27:56.234 [2024-11-20 12:37:39.042132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.234 [2024-11-20 12:37:39.042191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.234 [2024-11-20 12:37:39.042205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.234 [2024-11-20 12:37:39.042211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.234 [2024-11-20 12:37:39.042218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.234 [2024-11-20 12:37:39.042234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.234 qpair failed and we were unable to recover it. 
00:27:56.234 [2024-11-20 12:37:39.052167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.234 [2024-11-20 12:37:39.052220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.234 [2024-11-20 12:37:39.052234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.234 [2024-11-20 12:37:39.052242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.234 [2024-11-20 12:37:39.052248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.234 [2024-11-20 12:37:39.052263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.234 qpair failed and we were unable to recover it. 
00:27:56.234 [2024-11-20 12:37:39.062258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.234 [2024-11-20 12:37:39.062317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.234 [2024-11-20 12:37:39.062331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.234 [2024-11-20 12:37:39.062339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.234 [2024-11-20 12:37:39.062349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.234 [2024-11-20 12:37:39.062364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.234 qpair failed and we were unable to recover it. 
00:27:56.234 [2024-11-20 12:37:39.072224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.234 [2024-11-20 12:37:39.072284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.234 [2024-11-20 12:37:39.072298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.234 [2024-11-20 12:37:39.072306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.234 [2024-11-20 12:37:39.072312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.234 [2024-11-20 12:37:39.072327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.234 qpair failed and we were unable to recover it. 
00:27:56.234 [2024-11-20 12:37:39.082264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.234 [2024-11-20 12:37:39.082320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.234 [2024-11-20 12:37:39.082333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.234 [2024-11-20 12:37:39.082342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.234 [2024-11-20 12:37:39.082349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.234 [2024-11-20 12:37:39.082365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.234 qpair failed and we were unable to recover it. 
00:27:56.234 [2024-11-20 12:37:39.092276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.234 [2024-11-20 12:37:39.092339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.234 [2024-11-20 12:37:39.092352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.234 [2024-11-20 12:37:39.092360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.235 [2024-11-20 12:37:39.092366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.235 [2024-11-20 12:37:39.092381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.235 qpair failed and we were unable to recover it. 
00:27:56.235 [2024-11-20 12:37:39.102233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.235 [2024-11-20 12:37:39.102289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.235 [2024-11-20 12:37:39.102304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.235 [2024-11-20 12:37:39.102312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.235 [2024-11-20 12:37:39.102318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.235 [2024-11-20 12:37:39.102333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.235 qpair failed and we were unable to recover it. 
00:27:56.235 [2024-11-20 12:37:39.112340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.235 [2024-11-20 12:37:39.112398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.235 [2024-11-20 12:37:39.112412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.235 [2024-11-20 12:37:39.112420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.235 [2024-11-20 12:37:39.112427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.235 [2024-11-20 12:37:39.112442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.235 qpair failed and we were unable to recover it. 
00:27:56.235 [2024-11-20 12:37:39.122351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.235 [2024-11-20 12:37:39.122409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.235 [2024-11-20 12:37:39.122423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.235 [2024-11-20 12:37:39.122430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.235 [2024-11-20 12:37:39.122437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.235 [2024-11-20 12:37:39.122452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.235 qpair failed and we were unable to recover it. 
00:27:56.235 [2024-11-20 12:37:39.132376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.235 [2024-11-20 12:37:39.132445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.235 [2024-11-20 12:37:39.132459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.235 [2024-11-20 12:37:39.132466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.235 [2024-11-20 12:37:39.132473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.235 [2024-11-20 12:37:39.132487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.235 qpair failed and we were unable to recover it. 
00:27:56.235 [2024-11-20 12:37:39.142409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.235 [2024-11-20 12:37:39.142467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.235 [2024-11-20 12:37:39.142480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.235 [2024-11-20 12:37:39.142488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.235 [2024-11-20 12:37:39.142495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.235 [2024-11-20 12:37:39.142510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.235 qpair failed and we were unable to recover it. 
00:27:56.235 [2024-11-20 12:37:39.152458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.235 [2024-11-20 12:37:39.152517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.235 [2024-11-20 12:37:39.152534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.235 [2024-11-20 12:37:39.152541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.235 [2024-11-20 12:37:39.152548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.235 [2024-11-20 12:37:39.152563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.235 qpair failed and we were unable to recover it. 
00:27:56.235 [2024-11-20 12:37:39.162432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.235 [2024-11-20 12:37:39.162522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.235 [2024-11-20 12:37:39.162535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.235 [2024-11-20 12:37:39.162542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.235 [2024-11-20 12:37:39.162548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.235 [2024-11-20 12:37:39.162562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.235 qpair failed and we were unable to recover it. 
00:27:56.235 [2024-11-20 12:37:39.172435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.235 [2024-11-20 12:37:39.172490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.235 [2024-11-20 12:37:39.172504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.235 [2024-11-20 12:37:39.172511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.235 [2024-11-20 12:37:39.172518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.235 [2024-11-20 12:37:39.172533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.235 qpair failed and we were unable to recover it. 
00:27:56.235 [2024-11-20 12:37:39.182578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.235 [2024-11-20 12:37:39.182685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.235 [2024-11-20 12:37:39.182699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.235 [2024-11-20 12:37:39.182707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.235 [2024-11-20 12:37:39.182713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.235 [2024-11-20 12:37:39.182728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.235 qpair failed and we were unable to recover it. 
00:27:56.235 [2024-11-20 12:37:39.192559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.235 [2024-11-20 12:37:39.192617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.235 [2024-11-20 12:37:39.192631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.235 [2024-11-20 12:37:39.192638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.235 [2024-11-20 12:37:39.192648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.235 [2024-11-20 12:37:39.192664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.235 qpair failed and we were unable to recover it. 
00:27:56.236 [2024-11-20 12:37:39.202593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.236 [2024-11-20 12:37:39.202667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.236 [2024-11-20 12:37:39.202682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.236 [2024-11-20 12:37:39.202689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.236 [2024-11-20 12:37:39.202695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.236 [2024-11-20 12:37:39.202710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.236 qpair failed and we were unable to recover it. 
00:27:56.236 [2024-11-20 12:37:39.212601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.236 [2024-11-20 12:37:39.212671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.236 [2024-11-20 12:37:39.212686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.236 [2024-11-20 12:37:39.212692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.236 [2024-11-20 12:37:39.212698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.236 [2024-11-20 12:37:39.212714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.236 qpair failed and we were unable to recover it. 
00:27:56.236 [2024-11-20 12:37:39.222630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.236 [2024-11-20 12:37:39.222681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.236 [2024-11-20 12:37:39.222695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.236 [2024-11-20 12:37:39.222702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.236 [2024-11-20 12:37:39.222709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.236 [2024-11-20 12:37:39.222724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.236 qpair failed and we were unable to recover it. 
00:27:56.236 [2024-11-20 12:37:39.232672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.236 [2024-11-20 12:37:39.232728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.236 [2024-11-20 12:37:39.232742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.236 [2024-11-20 12:37:39.232749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.236 [2024-11-20 12:37:39.232755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.236 [2024-11-20 12:37:39.232770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.236 qpair failed and we were unable to recover it. 
00:27:56.236 [2024-11-20 12:37:39.242686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.236 [2024-11-20 12:37:39.242738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.236 [2024-11-20 12:37:39.242752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.236 [2024-11-20 12:37:39.242759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.236 [2024-11-20 12:37:39.242765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.236 [2024-11-20 12:37:39.242781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.236 qpair failed and we were unable to recover it. 
00:27:56.236 [2024-11-20 12:37:39.252696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.236 [2024-11-20 12:37:39.252748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.236 [2024-11-20 12:37:39.252762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.236 [2024-11-20 12:37:39.252769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.236 [2024-11-20 12:37:39.252776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.236 [2024-11-20 12:37:39.252792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.236 qpair failed and we were unable to recover it. 
00:27:56.236 [2024-11-20 12:37:39.262738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.236 [2024-11-20 12:37:39.262792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.236 [2024-11-20 12:37:39.262806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.236 [2024-11-20 12:37:39.262813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.236 [2024-11-20 12:37:39.262820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.236 [2024-11-20 12:37:39.262835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.236 qpair failed and we were unable to recover it. 
00:27:56.236 [2024-11-20 12:37:39.272786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.236 [2024-11-20 12:37:39.272846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.236 [2024-11-20 12:37:39.272860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.236 [2024-11-20 12:37:39.272867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.236 [2024-11-20 12:37:39.272874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.236 [2024-11-20 12:37:39.272889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.236 qpair failed and we were unable to recover it. 
00:27:56.236 [2024-11-20 12:37:39.282836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.236 [2024-11-20 12:37:39.282893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.236 [2024-11-20 12:37:39.282911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.236 [2024-11-20 12:37:39.282918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.236 [2024-11-20 12:37:39.282924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.236 [2024-11-20 12:37:39.282939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.236 qpair failed and we were unable to recover it. 
00:27:56.236 [2024-11-20 12:37:39.292816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.236 [2024-11-20 12:37:39.292873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.236 [2024-11-20 12:37:39.292887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.236 [2024-11-20 12:37:39.292894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.236 [2024-11-20 12:37:39.292902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.236 [2024-11-20 12:37:39.292917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.236 qpair failed and we were unable to recover it. 
00:27:56.236 [2024-11-20 12:37:39.302878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.236 [2024-11-20 12:37:39.302934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.236 [2024-11-20 12:37:39.302952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.236 [2024-11-20 12:37:39.302960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.236 [2024-11-20 12:37:39.302966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.236 [2024-11-20 12:37:39.302982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.236 qpair failed and we were unable to recover it. 
00:27:56.236 [2024-11-20 12:37:39.312878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.236 [2024-11-20 12:37:39.312935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.236 [2024-11-20 12:37:39.312953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.237 [2024-11-20 12:37:39.312960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.237 [2024-11-20 12:37:39.312967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.237 [2024-11-20 12:37:39.312983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.237 qpair failed and we were unable to recover it. 
00:27:56.237 [2024-11-20 12:37:39.322905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.237 [2024-11-20 12:37:39.322985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.237 [2024-11-20 12:37:39.323000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.237 [2024-11-20 12:37:39.323010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.237 [2024-11-20 12:37:39.323016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.237 [2024-11-20 12:37:39.323031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.237 qpair failed and we were unable to recover it. 
00:27:56.237 [2024-11-20 12:37:39.332937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.237 [2024-11-20 12:37:39.332996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.237 [2024-11-20 12:37:39.333010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.237 [2024-11-20 12:37:39.333017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.237 [2024-11-20 12:37:39.333024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.237 [2024-11-20 12:37:39.333038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.237 qpair failed and we were unable to recover it. 
00:27:56.237 [2024-11-20 12:37:39.342982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.237 [2024-11-20 12:37:39.343089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.237 [2024-11-20 12:37:39.343104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.237 [2024-11-20 12:37:39.343112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.237 [2024-11-20 12:37:39.343118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.237 [2024-11-20 12:37:39.343134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.237 qpair failed and we were unable to recover it. 
00:27:56.498 [2024-11-20 12:37:39.353011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.498 [2024-11-20 12:37:39.353097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.498 [2024-11-20 12:37:39.353111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.498 [2024-11-20 12:37:39.353119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.498 [2024-11-20 12:37:39.353125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.498 [2024-11-20 12:37:39.353140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.498 qpair failed and we were unable to recover it. 
00:27:56.498 [2024-11-20 12:37:39.363006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.498 [2024-11-20 12:37:39.363060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.498 [2024-11-20 12:37:39.363073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.498 [2024-11-20 12:37:39.363080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.498 [2024-11-20 12:37:39.363087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.498 [2024-11-20 12:37:39.363106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.498 qpair failed and we were unable to recover it. 
00:27:56.498 [2024-11-20 12:37:39.373051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.498 [2024-11-20 12:37:39.373105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.498 [2024-11-20 12:37:39.373120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.498 [2024-11-20 12:37:39.373127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.498 [2024-11-20 12:37:39.373134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.498 [2024-11-20 12:37:39.373150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.498 qpair failed and we were unable to recover it. 
00:27:56.498 [2024-11-20 12:37:39.383084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.498 [2024-11-20 12:37:39.383139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.498 [2024-11-20 12:37:39.383154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.498 [2024-11-20 12:37:39.383163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.498 [2024-11-20 12:37:39.383171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.498 [2024-11-20 12:37:39.383186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.498 qpair failed and we were unable to recover it. 
00:27:56.498 [2024-11-20 12:37:39.393111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.498 [2024-11-20 12:37:39.393177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.498 [2024-11-20 12:37:39.393191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.498 [2024-11-20 12:37:39.393198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.498 [2024-11-20 12:37:39.393205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.498 [2024-11-20 12:37:39.393219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.498 qpair failed and we were unable to recover it. 
00:27:56.498 [2024-11-20 12:37:39.403146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.498 [2024-11-20 12:37:39.403203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.498 [2024-11-20 12:37:39.403216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.498 [2024-11-20 12:37:39.403223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.498 [2024-11-20 12:37:39.403229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.498 [2024-11-20 12:37:39.403245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.498 qpair failed and we were unable to recover it. 
00:27:56.498 [2024-11-20 12:37:39.413168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.498 [2024-11-20 12:37:39.413226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.498 [2024-11-20 12:37:39.413239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.498 [2024-11-20 12:37:39.413247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.498 [2024-11-20 12:37:39.413254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.498 [2024-11-20 12:37:39.413269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.498 qpair failed and we were unable to recover it. 
00:27:56.498 [2024-11-20 12:37:39.423198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.498 [2024-11-20 12:37:39.423254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.498 [2024-11-20 12:37:39.423268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.498 [2024-11-20 12:37:39.423275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.498 [2024-11-20 12:37:39.423282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.498 [2024-11-20 12:37:39.423298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.498 qpair failed and we were unable to recover it. 
00:27:56.498 [2024-11-20 12:37:39.433233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.498 [2024-11-20 12:37:39.433288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.498 [2024-11-20 12:37:39.433302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.498 [2024-11-20 12:37:39.433309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.498 [2024-11-20 12:37:39.433316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.498 [2024-11-20 12:37:39.433331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.498 qpair failed and we were unable to recover it. 
00:27:56.498 [2024-11-20 12:37:39.443328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.498 [2024-11-20 12:37:39.443379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.498 [2024-11-20 12:37:39.443393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.498 [2024-11-20 12:37:39.443400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.498 [2024-11-20 12:37:39.443406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.498 [2024-11-20 12:37:39.443421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.498 qpair failed and we were unable to recover it. 
00:27:56.498 [2024-11-20 12:37:39.453322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.498 [2024-11-20 12:37:39.453377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.498 [2024-11-20 12:37:39.453391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.498 [2024-11-20 12:37:39.453401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.498 [2024-11-20 12:37:39.453409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.498 [2024-11-20 12:37:39.453423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.498 qpair failed and we were unable to recover it. 
00:27:56.498 [2024-11-20 12:37:39.463306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.498 [2024-11-20 12:37:39.463358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.498 [2024-11-20 12:37:39.463372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.498 [2024-11-20 12:37:39.463380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.498 [2024-11-20 12:37:39.463386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.498 [2024-11-20 12:37:39.463401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.498 qpair failed and we were unable to recover it. 
00:27:56.498 [2024-11-20 12:37:39.473341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.498 [2024-11-20 12:37:39.473399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.498 [2024-11-20 12:37:39.473412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.499 [2024-11-20 12:37:39.473420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.499 [2024-11-20 12:37:39.473427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.499 [2024-11-20 12:37:39.473442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.499 qpair failed and we were unable to recover it. 
00:27:56.499 [2024-11-20 12:37:39.483408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.499 [2024-11-20 12:37:39.483466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.499 [2024-11-20 12:37:39.483479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.499 [2024-11-20 12:37:39.483486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.499 [2024-11-20 12:37:39.483493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.499 [2024-11-20 12:37:39.483508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.499 qpair failed and we were unable to recover it. 
00:27:56.499 [2024-11-20 12:37:39.493395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.499 [2024-11-20 12:37:39.493447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.499 [2024-11-20 12:37:39.493461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.499 [2024-11-20 12:37:39.493468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.499 [2024-11-20 12:37:39.493474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.499 [2024-11-20 12:37:39.493493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.499 qpair failed and we were unable to recover it. 
00:27:56.499 [2024-11-20 12:37:39.503425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.499 [2024-11-20 12:37:39.503477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.499 [2024-11-20 12:37:39.503491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.499 [2024-11-20 12:37:39.503498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.499 [2024-11-20 12:37:39.503504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.499 [2024-11-20 12:37:39.503519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.499 qpair failed and we were unable to recover it. 
00:27:56.499 [2024-11-20 12:37:39.513465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.499 [2024-11-20 12:37:39.513566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.499 [2024-11-20 12:37:39.513579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.499 [2024-11-20 12:37:39.513587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.499 [2024-11-20 12:37:39.513593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.499 [2024-11-20 12:37:39.513608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.499 qpair failed and we were unable to recover it. 
00:27:56.499 [2024-11-20 12:37:39.523487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.499 [2024-11-20 12:37:39.523539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.499 [2024-11-20 12:37:39.523552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.499 [2024-11-20 12:37:39.523559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.499 [2024-11-20 12:37:39.523565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.499 [2024-11-20 12:37:39.523580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.499 qpair failed and we were unable to recover it. 
00:27:56.499 [2024-11-20 12:37:39.533502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.499 [2024-11-20 12:37:39.533559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.499 [2024-11-20 12:37:39.533573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.499 [2024-11-20 12:37:39.533581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.499 [2024-11-20 12:37:39.533588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.499 [2024-11-20 12:37:39.533603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.499 qpair failed and we were unable to recover it. 
00:27:56.499 [2024-11-20 12:37:39.543524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.499 [2024-11-20 12:37:39.543575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.499 [2024-11-20 12:37:39.543589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.499 [2024-11-20 12:37:39.543596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.499 [2024-11-20 12:37:39.543602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.499 [2024-11-20 12:37:39.543618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.499 qpair failed and we were unable to recover it. 
00:27:56.499 [2024-11-20 12:37:39.553536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.499 [2024-11-20 12:37:39.553595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.499 [2024-11-20 12:37:39.553610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.499 [2024-11-20 12:37:39.553617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.499 [2024-11-20 12:37:39.553624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.499 [2024-11-20 12:37:39.553638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.499 qpair failed and we were unable to recover it. 
00:27:56.499 [2024-11-20 12:37:39.563540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.499 [2024-11-20 12:37:39.563597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.499 [2024-11-20 12:37:39.563611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.499 [2024-11-20 12:37:39.563618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.499 [2024-11-20 12:37:39.563624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.499 [2024-11-20 12:37:39.563639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.499 qpair failed and we were unable to recover it. 
00:27:56.499 [2024-11-20 12:37:39.573622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.499 [2024-11-20 12:37:39.573678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.499 [2024-11-20 12:37:39.573693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.499 [2024-11-20 12:37:39.573701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.499 [2024-11-20 12:37:39.573708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.499 [2024-11-20 12:37:39.573724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.499 qpair failed and we were unable to recover it. 
00:27:56.499 [2024-11-20 12:37:39.583641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.499 [2024-11-20 12:37:39.583703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.499 [2024-11-20 12:37:39.583721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.499 [2024-11-20 12:37:39.583729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.499 [2024-11-20 12:37:39.583735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.499 [2024-11-20 12:37:39.583750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.499 qpair failed and we were unable to recover it. 
00:27:56.499 [2024-11-20 12:37:39.593667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.499 [2024-11-20 12:37:39.593721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.499 [2024-11-20 12:37:39.593736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.499 [2024-11-20 12:37:39.593743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.499 [2024-11-20 12:37:39.593750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.499 [2024-11-20 12:37:39.593765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.499 qpair failed and we were unable to recover it. 
00:27:56.499 [2024-11-20 12:37:39.603699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.499 [2024-11-20 12:37:39.603752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.499 [2024-11-20 12:37:39.603767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.499 [2024-11-20 12:37:39.603775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.500 [2024-11-20 12:37:39.603782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.500 [2024-11-20 12:37:39.603798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.500 qpair failed and we were unable to recover it. 
00:27:56.760 [2024-11-20 12:37:39.613656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.760 [2024-11-20 12:37:39.613706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.760 [2024-11-20 12:37:39.613721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.760 [2024-11-20 12:37:39.613728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.760 [2024-11-20 12:37:39.613735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.760 [2024-11-20 12:37:39.613751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.760 qpair failed and we were unable to recover it. 
00:27:56.760 [2024-11-20 12:37:39.623764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.760 [2024-11-20 12:37:39.623819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.760 [2024-11-20 12:37:39.623833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.760 [2024-11-20 12:37:39.623840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.760 [2024-11-20 12:37:39.623852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.760 [2024-11-20 12:37:39.623867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.760 qpair failed and we were unable to recover it. 
00:27:56.760 [2024-11-20 12:37:39.633805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.760 [2024-11-20 12:37:39.633882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.760 [2024-11-20 12:37:39.633899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.760 [2024-11-20 12:37:39.633907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.760 [2024-11-20 12:37:39.633915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.760 [2024-11-20 12:37:39.633932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.760 qpair failed and we were unable to recover it. 
00:27:56.760 [2024-11-20 12:37:39.643820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.760 [2024-11-20 12:37:39.643877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.760 [2024-11-20 12:37:39.643893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.761 [2024-11-20 12:37:39.643901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.761 [2024-11-20 12:37:39.643910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.761 [2024-11-20 12:37:39.643926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.761 qpair failed and we were unable to recover it. 
00:27:56.761 [2024-11-20 12:37:39.653850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.761 [2024-11-20 12:37:39.653902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.761 [2024-11-20 12:37:39.653917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.761 [2024-11-20 12:37:39.653925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.761 [2024-11-20 12:37:39.653931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.761 [2024-11-20 12:37:39.653952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.761 qpair failed and we were unable to recover it. 
00:27:56.761 [2024-11-20 12:37:39.663876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.761 [2024-11-20 12:37:39.663932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.761 [2024-11-20 12:37:39.663950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.761 [2024-11-20 12:37:39.663957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.761 [2024-11-20 12:37:39.663964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.761 [2024-11-20 12:37:39.663980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.761 qpair failed and we were unable to recover it. 
00:27:56.761 [2024-11-20 12:37:39.673897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.761 [2024-11-20 12:37:39.673959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.761 [2024-11-20 12:37:39.673973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.761 [2024-11-20 12:37:39.673981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.761 [2024-11-20 12:37:39.673988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.761 [2024-11-20 12:37:39.674003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.761 qpair failed and we were unable to recover it. 
00:27:56.761 [2024-11-20 12:37:39.683925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.761 [2024-11-20 12:37:39.683987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.761 [2024-11-20 12:37:39.684002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.761 [2024-11-20 12:37:39.684009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.761 [2024-11-20 12:37:39.684016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.761 [2024-11-20 12:37:39.684031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.761 qpair failed and we were unable to recover it. 
00:27:56.761 [2024-11-20 12:37:39.693958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.761 [2024-11-20 12:37:39.694015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.761 [2024-11-20 12:37:39.694028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.761 [2024-11-20 12:37:39.694036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.761 [2024-11-20 12:37:39.694042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.761 [2024-11-20 12:37:39.694057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.761 qpair failed and we were unable to recover it. 
00:27:56.761 [2024-11-20 12:37:39.703960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.761 [2024-11-20 12:37:39.704019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.761 [2024-11-20 12:37:39.704033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.761 [2024-11-20 12:37:39.704040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.761 [2024-11-20 12:37:39.704046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.761 [2024-11-20 12:37:39.704061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.761 qpair failed and we were unable to recover it. 
00:27:56.761 [2024-11-20 12:37:39.714054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.761 [2024-11-20 12:37:39.714111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.761 [2024-11-20 12:37:39.714129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.761 [2024-11-20 12:37:39.714136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.761 [2024-11-20 12:37:39.714143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.761 [2024-11-20 12:37:39.714158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.761 qpair failed and we were unable to recover it. 
00:27:56.761 [2024-11-20 12:37:39.724113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.761 [2024-11-20 12:37:39.724176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.761 [2024-11-20 12:37:39.724190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.761 [2024-11-20 12:37:39.724197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.761 [2024-11-20 12:37:39.724203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.761 [2024-11-20 12:37:39.724219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.761 qpair failed and we were unable to recover it. 
00:27:56.761 [2024-11-20 12:37:39.734113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.761 [2024-11-20 12:37:39.734164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.761 [2024-11-20 12:37:39.734177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.761 [2024-11-20 12:37:39.734184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.761 [2024-11-20 12:37:39.734190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.761 [2024-11-20 12:37:39.734205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.761 qpair failed and we were unable to recover it. 
00:27:56.761 [2024-11-20 12:37:39.744121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.761 [2024-11-20 12:37:39.744216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.761 [2024-11-20 12:37:39.744230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.761 [2024-11-20 12:37:39.744238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.761 [2024-11-20 12:37:39.744244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.761 [2024-11-20 12:37:39.744259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.761 qpair failed and we were unable to recover it. 
00:27:56.761 [2024-11-20 12:37:39.754159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.761 [2024-11-20 12:37:39.754220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.761 [2024-11-20 12:37:39.754234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.761 [2024-11-20 12:37:39.754241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.761 [2024-11-20 12:37:39.754251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.761 [2024-11-20 12:37:39.754265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.761 qpair failed and we were unable to recover it. 
00:27:56.761 [2024-11-20 12:37:39.764096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.761 [2024-11-20 12:37:39.764168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.761 [2024-11-20 12:37:39.764182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.761 [2024-11-20 12:37:39.764190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.761 [2024-11-20 12:37:39.764197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.761 [2024-11-20 12:37:39.764211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.761 qpair failed and we were unable to recover it. 
00:27:56.761 [2024-11-20 12:37:39.774174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.761 [2024-11-20 12:37:39.774254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.761 [2024-11-20 12:37:39.774268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-11-20 12:37:39.774275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-11-20 12:37:39.774281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.762 [2024-11-20 12:37:39.774296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-11-20 12:37:39.784231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-11-20 12:37:39.784286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-11-20 12:37:39.784299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-11-20 12:37:39.784306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-11-20 12:37:39.784313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.762 [2024-11-20 12:37:39.784328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-11-20 12:37:39.794211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-11-20 12:37:39.794283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-11-20 12:37:39.794297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-11-20 12:37:39.794305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-11-20 12:37:39.794311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.762 [2024-11-20 12:37:39.794327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-11-20 12:37:39.804229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-11-20 12:37:39.804288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-11-20 12:37:39.804302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-11-20 12:37:39.804310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-11-20 12:37:39.804316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.762 [2024-11-20 12:37:39.804332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-11-20 12:37:39.814325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-11-20 12:37:39.814376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-11-20 12:37:39.814389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-11-20 12:37:39.814396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-11-20 12:37:39.814402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.762 [2024-11-20 12:37:39.814418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-11-20 12:37:39.824375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-11-20 12:37:39.824480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-11-20 12:37:39.824494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-11-20 12:37:39.824501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-11-20 12:37:39.824507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.762 [2024-11-20 12:37:39.824522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-11-20 12:37:39.834363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-11-20 12:37:39.834421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-11-20 12:37:39.834435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-11-20 12:37:39.834442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-11-20 12:37:39.834449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.762 [2024-11-20 12:37:39.834462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-11-20 12:37:39.844405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-11-20 12:37:39.844463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-11-20 12:37:39.844481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-11-20 12:37:39.844488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-11-20 12:37:39.844495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.762 [2024-11-20 12:37:39.844510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-11-20 12:37:39.854368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-11-20 12:37:39.854420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-11-20 12:37:39.854435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-11-20 12:37:39.854442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-11-20 12:37:39.854449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.762 [2024-11-20 12:37:39.854464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-11-20 12:37:39.864453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-11-20 12:37:39.864510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-11-20 12:37:39.864525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-11-20 12:37:39.864532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-11-20 12:37:39.864539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.762 [2024-11-20 12:37:39.864554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:56.762 [2024-11-20 12:37:39.874489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.762 [2024-11-20 12:37:39.874543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.762 [2024-11-20 12:37:39.874557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.762 [2024-11-20 12:37:39.874564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.762 [2024-11-20 12:37:39.874570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:56.762 [2024-11-20 12:37:39.874585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:56.762 qpair failed and we were unable to recover it. 
00:27:57.023 [2024-11-20 12:37:39.884557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.023 [2024-11-20 12:37:39.884612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.023 [2024-11-20 12:37:39.884626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.023 [2024-11-20 12:37:39.884636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.023 [2024-11-20 12:37:39.884643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.023 [2024-11-20 12:37:39.884658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.023 qpair failed and we were unable to recover it. 
00:27:57.023 [2024-11-20 12:37:39.894548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.023 [2024-11-20 12:37:39.894600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.023 [2024-11-20 12:37:39.894614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.023 [2024-11-20 12:37:39.894621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.023 [2024-11-20 12:37:39.894627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.023 [2024-11-20 12:37:39.894642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.023 qpair failed and we were unable to recover it. 
00:27:57.023 [2024-11-20 12:37:39.904493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.023 [2024-11-20 12:37:39.904557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.023 [2024-11-20 12:37:39.904570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.023 [2024-11-20 12:37:39.904577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.023 [2024-11-20 12:37:39.904584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.023 [2024-11-20 12:37:39.904600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.023 qpair failed and we were unable to recover it. 
00:27:57.023 [2024-11-20 12:37:39.914533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.023 [2024-11-20 12:37:39.914643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.023 [2024-11-20 12:37:39.914657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.023 [2024-11-20 12:37:39.914664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.023 [2024-11-20 12:37:39.914670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.023 [2024-11-20 12:37:39.914685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.023 qpair failed and we were unable to recover it. 
00:27:57.023 [2024-11-20 12:37:39.924583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.023 [2024-11-20 12:37:39.924674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.023 [2024-11-20 12:37:39.924687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.023 [2024-11-20 12:37:39.924694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.023 [2024-11-20 12:37:39.924700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.023 [2024-11-20 12:37:39.924718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.023 qpair failed and we were unable to recover it. 
00:27:57.023 [2024-11-20 12:37:39.934579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.023 [2024-11-20 12:37:39.934634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.023 [2024-11-20 12:37:39.934647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.023 [2024-11-20 12:37:39.934654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.023 [2024-11-20 12:37:39.934661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.023 [2024-11-20 12:37:39.934677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.023 qpair failed and we were unable to recover it. 
00:27:57.023 [2024-11-20 12:37:39.944670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.023 [2024-11-20 12:37:39.944749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.023 [2024-11-20 12:37:39.944763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.023 [2024-11-20 12:37:39.944771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.023 [2024-11-20 12:37:39.944777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.023 [2024-11-20 12:37:39.944792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.023 qpair failed and we were unable to recover it. 
00:27:57.023 [2024-11-20 12:37:39.954709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.023 [2024-11-20 12:37:39.954766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.023 [2024-11-20 12:37:39.954779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.024 [2024-11-20 12:37:39.954786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.024 [2024-11-20 12:37:39.954793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.024 [2024-11-20 12:37:39.954808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.024 qpair failed and we were unable to recover it. 
00:27:57.024 [2024-11-20 12:37:39.964747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.024 [2024-11-20 12:37:39.964806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.024 [2024-11-20 12:37:39.964820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.024 [2024-11-20 12:37:39.964827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.024 [2024-11-20 12:37:39.964833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.024 [2024-11-20 12:37:39.964848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.024 qpair failed and we were unable to recover it. 
00:27:57.024 [2024-11-20 12:37:39.974705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.024 [2024-11-20 12:37:39.974767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.024 [2024-11-20 12:37:39.974781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.024 [2024-11-20 12:37:39.974788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.024 [2024-11-20 12:37:39.974795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.024 [2024-11-20 12:37:39.974809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.024 qpair failed and we were unable to recover it. 
00:27:57.024 [2024-11-20 12:37:39.984726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.024 [2024-11-20 12:37:39.984784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.024 [2024-11-20 12:37:39.984798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.024 [2024-11-20 12:37:39.984805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.024 [2024-11-20 12:37:39.984812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.024 [2024-11-20 12:37:39.984826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.024 qpair failed and we were unable to recover it. 
00:27:57.024 [2024-11-20 12:37:39.994836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.024 [2024-11-20 12:37:39.994915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.024 [2024-11-20 12:37:39.994929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.024 [2024-11-20 12:37:39.994936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.024 [2024-11-20 12:37:39.994942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.024 [2024-11-20 12:37:39.994962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.024 qpair failed and we were unable to recover it. 
00:27:57.024 [2024-11-20 12:37:40.004874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.024 [2024-11-20 12:37:40.004933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.024 [2024-11-20 12:37:40.004957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.024 [2024-11-20 12:37:40.004967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.024 [2024-11-20 12:37:40.004974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.024 [2024-11-20 12:37:40.004993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.024 qpair failed and we were unable to recover it. 
00:27:57.024 [2024-11-20 12:37:40.014894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.024 [2024-11-20 12:37:40.014960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.024 [2024-11-20 12:37:40.014981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.024 [2024-11-20 12:37:40.014996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.024 [2024-11-20 12:37:40.015005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.024 [2024-11-20 12:37:40.015027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.024 qpair failed and we were unable to recover it. 
00:27:57.024 [2024-11-20 12:37:40.024999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.024 [2024-11-20 12:37:40.025092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.024 [2024-11-20 12:37:40.025111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.024 [2024-11-20 12:37:40.025120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.024 [2024-11-20 12:37:40.025127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.024 [2024-11-20 12:37:40.025145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.024 qpair failed and we were unable to recover it. 
00:27:57.024 [2024-11-20 12:37:40.034902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.024 [2024-11-20 12:37:40.034963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.024 [2024-11-20 12:37:40.034979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.024 [2024-11-20 12:37:40.034987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.024 [2024-11-20 12:37:40.034994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.024 [2024-11-20 12:37:40.035010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.024 qpair failed and we were unable to recover it. 
00:27:57.024 [2024-11-20 12:37:40.045035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.024 [2024-11-20 12:37:40.045127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.024 [2024-11-20 12:37:40.045142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.024 [2024-11-20 12:37:40.045150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.024 [2024-11-20 12:37:40.045156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.024 [2024-11-20 12:37:40.045173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.024 qpair failed and we were unable to recover it. 
00:27:57.024 [2024-11-20 12:37:40.055073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.024 [2024-11-20 12:37:40.055180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.024 [2024-11-20 12:37:40.055196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.024 [2024-11-20 12:37:40.055204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.024 [2024-11-20 12:37:40.055212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.024 [2024-11-20 12:37:40.055232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.024 qpair failed and we were unable to recover it. 
00:27:57.024 [2024-11-20 12:37:40.065062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.024 [2024-11-20 12:37:40.065130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.024 [2024-11-20 12:37:40.065145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.024 [2024-11-20 12:37:40.065153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.024 [2024-11-20 12:37:40.065159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.024 [2024-11-20 12:37:40.065174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.024 qpair failed and we were unable to recover it. 
00:27:57.024 [2024-11-20 12:37:40.075082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.024 [2024-11-20 12:37:40.075185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.024 [2024-11-20 12:37:40.075199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.024 [2024-11-20 12:37:40.075207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.024 [2024-11-20 12:37:40.075213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.024 [2024-11-20 12:37:40.075228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.024 qpair failed and we were unable to recover it. 
00:27:57.024 [2024-11-20 12:37:40.085128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.025 [2024-11-20 12:37:40.085181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.025 [2024-11-20 12:37:40.085196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.025 [2024-11-20 12:37:40.085203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.025 [2024-11-20 12:37:40.085210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.025 [2024-11-20 12:37:40.085225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.025 qpair failed and we were unable to recover it. 
00:27:57.025 [2024-11-20 12:37:40.095127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.025 [2024-11-20 12:37:40.095181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.025 [2024-11-20 12:37:40.095196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.025 [2024-11-20 12:37:40.095203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.025 [2024-11-20 12:37:40.095210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.025 [2024-11-20 12:37:40.095225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.025 qpair failed and we were unable to recover it. 
00:27:57.025 [2024-11-20 12:37:40.105169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.025 [2024-11-20 12:37:40.105270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.025 [2024-11-20 12:37:40.105284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.025 [2024-11-20 12:37:40.105291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.025 [2024-11-20 12:37:40.105297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.025 [2024-11-20 12:37:40.105312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.025 qpair failed and we were unable to recover it. 
00:27:57.025 [2024-11-20 12:37:40.115197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.025 [2024-11-20 12:37:40.115253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.025 [2024-11-20 12:37:40.115266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.025 [2024-11-20 12:37:40.115273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.025 [2024-11-20 12:37:40.115279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.025 [2024-11-20 12:37:40.115295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.025 qpair failed and we were unable to recover it. 
00:27:57.025 [2024-11-20 12:37:40.125221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.025 [2024-11-20 12:37:40.125277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.025 [2024-11-20 12:37:40.125292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.025 [2024-11-20 12:37:40.125298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.025 [2024-11-20 12:37:40.125305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.025 [2024-11-20 12:37:40.125320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.025 qpair failed and we were unable to recover it. 
00:27:57.025 [2024-11-20 12:37:40.135246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.025 [2024-11-20 12:37:40.135299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.025 [2024-11-20 12:37:40.135312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.025 [2024-11-20 12:37:40.135320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.025 [2024-11-20 12:37:40.135327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.025 [2024-11-20 12:37:40.135342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.025 qpair failed and we were unable to recover it. 
00:27:57.294 [2024-11-20 12:37:40.145233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.294 [2024-11-20 12:37:40.145328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.294 [2024-11-20 12:37:40.145345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.294 [2024-11-20 12:37:40.145352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.294 [2024-11-20 12:37:40.145358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.294 [2024-11-20 12:37:40.145373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.294 qpair failed and we were unable to recover it. 
00:27:57.294 [2024-11-20 12:37:40.155293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.294 [2024-11-20 12:37:40.155347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.294 [2024-11-20 12:37:40.155361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.294 [2024-11-20 12:37:40.155368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.294 [2024-11-20 12:37:40.155374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.294 [2024-11-20 12:37:40.155389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.294 qpair failed and we were unable to recover it. 
00:27:57.294 [2024-11-20 12:37:40.165338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.294 [2024-11-20 12:37:40.165400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.294 [2024-11-20 12:37:40.165413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.294 [2024-11-20 12:37:40.165421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.294 [2024-11-20 12:37:40.165427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.294 [2024-11-20 12:37:40.165442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.294 qpair failed and we were unable to recover it. 
00:27:57.294 [2024-11-20 12:37:40.175346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.294 [2024-11-20 12:37:40.175402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.294 [2024-11-20 12:37:40.175416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.294 [2024-11-20 12:37:40.175423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.294 [2024-11-20 12:37:40.175430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.294 [2024-11-20 12:37:40.175445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.294 qpair failed and we were unable to recover it. 
00:27:57.294 [2024-11-20 12:37:40.185378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.294 [2024-11-20 12:37:40.185430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.294 [2024-11-20 12:37:40.185444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.294 [2024-11-20 12:37:40.185451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.294 [2024-11-20 12:37:40.185461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.294 [2024-11-20 12:37:40.185476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.294 qpair failed and we were unable to recover it. 
00:27:57.294 [2024-11-20 12:37:40.195404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.294 [2024-11-20 12:37:40.195464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.294 [2024-11-20 12:37:40.195477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.294 [2024-11-20 12:37:40.195484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.294 [2024-11-20 12:37:40.195492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.294 [2024-11-20 12:37:40.195506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.294 qpair failed and we were unable to recover it. 
00:27:57.294 [2024-11-20 12:37:40.205438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.294 [2024-11-20 12:37:40.205491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.294 [2024-11-20 12:37:40.205505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.294 [2024-11-20 12:37:40.205512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.294 [2024-11-20 12:37:40.205518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.294 [2024-11-20 12:37:40.205533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.294 qpair failed and we were unable to recover it. 
00:27:57.294 [2024-11-20 12:37:40.215462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.294 [2024-11-20 12:37:40.215514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.294 [2024-11-20 12:37:40.215528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.294 [2024-11-20 12:37:40.215535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.294 [2024-11-20 12:37:40.215542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.294 [2024-11-20 12:37:40.215557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.294 qpair failed and we were unable to recover it. 
00:27:57.294 [2024-11-20 12:37:40.225478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.294 [2024-11-20 12:37:40.225533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.294 [2024-11-20 12:37:40.225546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.294 [2024-11-20 12:37:40.225553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.294 [2024-11-20 12:37:40.225560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.294 [2024-11-20 12:37:40.225575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.294 qpair failed and we were unable to recover it. 
00:27:57.294 [2024-11-20 12:37:40.235505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.294 [2024-11-20 12:37:40.235559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.294 [2024-11-20 12:37:40.235573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.294 [2024-11-20 12:37:40.235580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.294 [2024-11-20 12:37:40.235587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.294 [2024-11-20 12:37:40.235601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.294 qpair failed and we were unable to recover it. 
00:27:57.294 [2024-11-20 12:37:40.245538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.294 [2024-11-20 12:37:40.245587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.294 [2024-11-20 12:37:40.245601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.294 [2024-11-20 12:37:40.245608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.294 [2024-11-20 12:37:40.245614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.294 [2024-11-20 12:37:40.245630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.294 qpair failed and we were unable to recover it. 
00:27:57.294 [2024-11-20 12:37:40.255586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.294 [2024-11-20 12:37:40.255652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.294 [2024-11-20 12:37:40.255666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.294 [2024-11-20 12:37:40.255674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.294 [2024-11-20 12:37:40.255680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.294 [2024-11-20 12:37:40.255695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.294 qpair failed and we were unable to recover it. 
00:27:57.294 [2024-11-20 12:37:40.265594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.294 [2024-11-20 12:37:40.265647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.294 [2024-11-20 12:37:40.265661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.294 [2024-11-20 12:37:40.265669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.294 [2024-11-20 12:37:40.265675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.294 [2024-11-20 12:37:40.265691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.294 qpair failed and we were unable to recover it. 
00:27:57.294 [2024-11-20 12:37:40.275629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.294 [2024-11-20 12:37:40.275686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.294 [2024-11-20 12:37:40.275703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.294 [2024-11-20 12:37:40.275710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.294 [2024-11-20 12:37:40.275717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.294 [2024-11-20 12:37:40.275732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.294 qpair failed and we were unable to recover it. 
00:27:57.294 [2024-11-20 12:37:40.285644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.294 [2024-11-20 12:37:40.285694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.294 [2024-11-20 12:37:40.285708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.294 [2024-11-20 12:37:40.285715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.294 [2024-11-20 12:37:40.285722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.294 [2024-11-20 12:37:40.285737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.294 qpair failed and we were unable to recover it. 
00:27:57.294 [2024-11-20 12:37:40.295695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.295 [2024-11-20 12:37:40.295751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.295 [2024-11-20 12:37:40.295765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.295 [2024-11-20 12:37:40.295773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.295 [2024-11-20 12:37:40.295780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.295 [2024-11-20 12:37:40.295794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.295 qpair failed and we were unable to recover it. 
00:27:57.295 [2024-11-20 12:37:40.305727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.295 [2024-11-20 12:37:40.305783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.295 [2024-11-20 12:37:40.305797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.295 [2024-11-20 12:37:40.305804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.295 [2024-11-20 12:37:40.305811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.295 [2024-11-20 12:37:40.305825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.295 qpair failed and we were unable to recover it. 
00:27:57.295 [2024-11-20 12:37:40.315727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.295 [2024-11-20 12:37:40.315785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.295 [2024-11-20 12:37:40.315799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.295 [2024-11-20 12:37:40.315806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.295 [2024-11-20 12:37:40.315818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.295 [2024-11-20 12:37:40.315833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.295 qpair failed and we were unable to recover it. 
00:27:57.295 [2024-11-20 12:37:40.325781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.295 [2024-11-20 12:37:40.325834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.295 [2024-11-20 12:37:40.325848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.295 [2024-11-20 12:37:40.325856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.295 [2024-11-20 12:37:40.325863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.295 [2024-11-20 12:37:40.325877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.295 qpair failed and we were unable to recover it. 
00:27:57.295 [2024-11-20 12:37:40.335734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.295 [2024-11-20 12:37:40.335786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.295 [2024-11-20 12:37:40.335800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.295 [2024-11-20 12:37:40.335807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.295 [2024-11-20 12:37:40.335814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.295 [2024-11-20 12:37:40.335828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.295 qpair failed and we were unable to recover it. 
00:27:57.295 [2024-11-20 12:37:40.345818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.295 [2024-11-20 12:37:40.345881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.295 [2024-11-20 12:37:40.345896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.295 [2024-11-20 12:37:40.345903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.295 [2024-11-20 12:37:40.345909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.295 [2024-11-20 12:37:40.345925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.295 qpair failed and we were unable to recover it. 
00:27:57.295 [2024-11-20 12:37:40.355861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.295 [2024-11-20 12:37:40.355929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.295 [2024-11-20 12:37:40.355942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.295 [2024-11-20 12:37:40.355953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.295 [2024-11-20 12:37:40.355960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.295 [2024-11-20 12:37:40.355976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.295 qpair failed and we were unable to recover it. 
00:27:57.295 [2024-11-20 12:37:40.365873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.295 [2024-11-20 12:37:40.365929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.295 [2024-11-20 12:37:40.365943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.295 [2024-11-20 12:37:40.365954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.295 [2024-11-20 12:37:40.365961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.295 [2024-11-20 12:37:40.365977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.295 qpair failed and we were unable to recover it. 
00:27:57.295 [2024-11-20 12:37:40.375898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.295 [2024-11-20 12:37:40.375963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.295 [2024-11-20 12:37:40.375977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.295 [2024-11-20 12:37:40.375985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.295 [2024-11-20 12:37:40.375991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.295 [2024-11-20 12:37:40.376006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.295 qpair failed and we were unable to recover it. 
00:27:57.295 [2024-11-20 12:37:40.385931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.295 [2024-11-20 12:37:40.385988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.295 [2024-11-20 12:37:40.386003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.295 [2024-11-20 12:37:40.386011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.295 [2024-11-20 12:37:40.386017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.295 [2024-11-20 12:37:40.386032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.295 qpair failed and we were unable to recover it. 
00:27:57.295 [2024-11-20 12:37:40.395933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.295 [2024-11-20 12:37:40.395992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.295 [2024-11-20 12:37:40.396006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.295 [2024-11-20 12:37:40.396014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.295 [2024-11-20 12:37:40.396021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.295 [2024-11-20 12:37:40.396036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.295 qpair failed and we were unable to recover it. 
00:27:57.558 [2024-11-20 12:37:40.406005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.558 [2024-11-20 12:37:40.406068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.558 [2024-11-20 12:37:40.406085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.558 [2024-11-20 12:37:40.406092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.558 [2024-11-20 12:37:40.406098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.558 [2024-11-20 12:37:40.406114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.558 qpair failed and we were unable to recover it. 
00:27:57.558 [2024-11-20 12:37:40.416022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.558 [2024-11-20 12:37:40.416077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.558 [2024-11-20 12:37:40.416091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.558 [2024-11-20 12:37:40.416099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.558 [2024-11-20 12:37:40.416106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.558 [2024-11-20 12:37:40.416121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.558 qpair failed and we were unable to recover it. 
00:27:57.558 [2024-11-20 12:37:40.426052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.558 [2024-11-20 12:37:40.426105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.558 [2024-11-20 12:37:40.426119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.558 [2024-11-20 12:37:40.426126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.558 [2024-11-20 12:37:40.426133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.558 [2024-11-20 12:37:40.426149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.558 qpair failed and we were unable to recover it. 
00:27:57.558 [2024-11-20 12:37:40.436078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.558 [2024-11-20 12:37:40.436139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.558 [2024-11-20 12:37:40.436153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.558 [2024-11-20 12:37:40.436160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.558 [2024-11-20 12:37:40.436167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.558 [2024-11-20 12:37:40.436182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.558 qpair failed and we were unable to recover it. 
00:27:57.558 [2024-11-20 12:37:40.446021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.558 [2024-11-20 12:37:40.446076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.558 [2024-11-20 12:37:40.446090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.558 [2024-11-20 12:37:40.446101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.558 [2024-11-20 12:37:40.446108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.558 [2024-11-20 12:37:40.446124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.558 qpair failed and we were unable to recover it. 
00:27:57.558 [2024-11-20 12:37:40.456127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.558 [2024-11-20 12:37:40.456184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.558 [2024-11-20 12:37:40.456198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.558 [2024-11-20 12:37:40.456206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.558 [2024-11-20 12:37:40.456213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.558 [2024-11-20 12:37:40.456229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.558 qpair failed and we were unable to recover it. 
00:27:57.558 [2024-11-20 12:37:40.466206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.559 [2024-11-20 12:37:40.466260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.559 [2024-11-20 12:37:40.466274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.559 [2024-11-20 12:37:40.466280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.559 [2024-11-20 12:37:40.466287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.559 [2024-11-20 12:37:40.466302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.559 qpair failed and we were unable to recover it. 
00:27:57.559 [2024-11-20 12:37:40.476175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.559 [2024-11-20 12:37:40.476233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.559 [2024-11-20 12:37:40.476246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.559 [2024-11-20 12:37:40.476253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.559 [2024-11-20 12:37:40.476260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.559 [2024-11-20 12:37:40.476275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.559 qpair failed and we were unable to recover it. 
00:27:57.559 [2024-11-20 12:37:40.486219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.559 [2024-11-20 12:37:40.486274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.559 [2024-11-20 12:37:40.486288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.559 [2024-11-20 12:37:40.486295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.559 [2024-11-20 12:37:40.486302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.559 [2024-11-20 12:37:40.486320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.559 qpair failed and we were unable to recover it. 
00:27:57.559 [2024-11-20 12:37:40.496232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.559 [2024-11-20 12:37:40.496285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.559 [2024-11-20 12:37:40.496299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.559 [2024-11-20 12:37:40.496306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.559 [2024-11-20 12:37:40.496313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.559 [2024-11-20 12:37:40.496328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.559 qpair failed and we were unable to recover it. 
00:27:57.559 [2024-11-20 12:37:40.506267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.559 [2024-11-20 12:37:40.506320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.559 [2024-11-20 12:37:40.506333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.559 [2024-11-20 12:37:40.506340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.559 [2024-11-20 12:37:40.506347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.559 [2024-11-20 12:37:40.506362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.559 qpair failed and we were unable to recover it. 
00:27:57.559 [2024-11-20 12:37:40.516307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.559 [2024-11-20 12:37:40.516380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.559 [2024-11-20 12:37:40.516395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.559 [2024-11-20 12:37:40.516402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.559 [2024-11-20 12:37:40.516408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.559 [2024-11-20 12:37:40.516423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.559 qpair failed and we were unable to recover it. 
00:27:57.559 [2024-11-20 12:37:40.526340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.559 [2024-11-20 12:37:40.526399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.559 [2024-11-20 12:37:40.526413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.559 [2024-11-20 12:37:40.526421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.559 [2024-11-20 12:37:40.526427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.559 [2024-11-20 12:37:40.526441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.559 qpair failed and we were unable to recover it. 
00:27:57.559 [2024-11-20 12:37:40.536349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.559 [2024-11-20 12:37:40.536407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.559 [2024-11-20 12:37:40.536421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.559 [2024-11-20 12:37:40.536428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.559 [2024-11-20 12:37:40.536436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.559 [2024-11-20 12:37:40.536450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.559 qpair failed and we were unable to recover it. 
00:27:57.559 [2024-11-20 12:37:40.546398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.559 [2024-11-20 12:37:40.546453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.559 [2024-11-20 12:37:40.546466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.559 [2024-11-20 12:37:40.546474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.559 [2024-11-20 12:37:40.546480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.559 [2024-11-20 12:37:40.546495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.559 qpair failed and we were unable to recover it. 
00:27:57.559 [2024-11-20 12:37:40.556413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.559 [2024-11-20 12:37:40.556471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.559 [2024-11-20 12:37:40.556486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.559 [2024-11-20 12:37:40.556494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.559 [2024-11-20 12:37:40.556502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.559 [2024-11-20 12:37:40.556517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.559 qpair failed and we were unable to recover it. 
00:27:57.559 [2024-11-20 12:37:40.566448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.559 [2024-11-20 12:37:40.566502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.559 [2024-11-20 12:37:40.566516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.559 [2024-11-20 12:37:40.566523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.559 [2024-11-20 12:37:40.566530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.559 [2024-11-20 12:37:40.566544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.559 qpair failed and we were unable to recover it. 
00:27:57.559 [2024-11-20 12:37:40.576470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.560 [2024-11-20 12:37:40.576523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.560 [2024-11-20 12:37:40.576536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.560 [2024-11-20 12:37:40.576547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.560 [2024-11-20 12:37:40.576553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.560 [2024-11-20 12:37:40.576568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.560 qpair failed and we were unable to recover it. 
00:27:57.560 [2024-11-20 12:37:40.586489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.560 [2024-11-20 12:37:40.586542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.560 [2024-11-20 12:37:40.586555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.560 [2024-11-20 12:37:40.586563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.560 [2024-11-20 12:37:40.586570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.560 [2024-11-20 12:37:40.586585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.560 qpair failed and we were unable to recover it. 
00:27:57.560 [2024-11-20 12:37:40.596527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.560 [2024-11-20 12:37:40.596585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.560 [2024-11-20 12:37:40.596599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.560 [2024-11-20 12:37:40.596606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.560 [2024-11-20 12:37:40.596613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.560 [2024-11-20 12:37:40.596628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.560 qpair failed and we were unable to recover it. 
00:27:57.560 [2024-11-20 12:37:40.606561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.560 [2024-11-20 12:37:40.606616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.560 [2024-11-20 12:37:40.606630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.560 [2024-11-20 12:37:40.606637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.560 [2024-11-20 12:37:40.606643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.560 [2024-11-20 12:37:40.606658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.560 qpair failed and we were unable to recover it. 
00:27:57.560 [2024-11-20 12:37:40.616583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.560 [2024-11-20 12:37:40.616637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.560 [2024-11-20 12:37:40.616650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.560 [2024-11-20 12:37:40.616657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.560 [2024-11-20 12:37:40.616664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.560 [2024-11-20 12:37:40.616682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.560 qpair failed and we were unable to recover it. 
00:27:57.560 [2024-11-20 12:37:40.626769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.560 [2024-11-20 12:37:40.626828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.560 [2024-11-20 12:37:40.626866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.560 [2024-11-20 12:37:40.626874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.560 [2024-11-20 12:37:40.626880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.560 [2024-11-20 12:37:40.626905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.560 qpair failed and we were unable to recover it. 
00:27:57.560 [2024-11-20 12:37:40.636577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.560 [2024-11-20 12:37:40.636634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.560 [2024-11-20 12:37:40.636648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.560 [2024-11-20 12:37:40.636656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.560 [2024-11-20 12:37:40.636663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.560 [2024-11-20 12:37:40.636679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.560 qpair failed and we were unable to recover it. 
00:27:57.560 [2024-11-20 12:37:40.646599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.560 [2024-11-20 12:37:40.646662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.560 [2024-11-20 12:37:40.646676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.560 [2024-11-20 12:37:40.646684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.560 [2024-11-20 12:37:40.646690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.560 [2024-11-20 12:37:40.646706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.560 qpair failed and we were unable to recover it. 
00:27:57.560 [2024-11-20 12:37:40.656694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.560 [2024-11-20 12:37:40.656750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.560 [2024-11-20 12:37:40.656764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.560 [2024-11-20 12:37:40.656772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.560 [2024-11-20 12:37:40.656778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.560 [2024-11-20 12:37:40.656793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.560 qpair failed and we were unable to recover it. 
00:27:57.560 [2024-11-20 12:37:40.666727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.560 [2024-11-20 12:37:40.666785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.560 [2024-11-20 12:37:40.666799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.560 [2024-11-20 12:37:40.666807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.560 [2024-11-20 12:37:40.666814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.560 [2024-11-20 12:37:40.666828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.560 qpair failed and we were unable to recover it. 
00:27:57.821 [2024-11-20 12:37:40.676767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.821 [2024-11-20 12:37:40.676836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.821 [2024-11-20 12:37:40.676851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.821 [2024-11-20 12:37:40.676859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.821 [2024-11-20 12:37:40.676865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:57.821 [2024-11-20 12:37:40.676880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:57.821 qpair failed and we were unable to recover it. 
00:27:58.085 [2024-11-20 12:37:41.017671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.085 [2024-11-20 12:37:41.017729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.085 [2024-11-20 12:37:41.017743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.085 [2024-11-20 12:37:41.017751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.085 [2024-11-20 12:37:41.017757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.085 [2024-11-20 12:37:41.017772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.085 qpair failed and we were unable to recover it. 
00:27:58.085 [2024-11-20 12:37:41.027769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.085 [2024-11-20 12:37:41.027827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.085 [2024-11-20 12:37:41.027841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.085 [2024-11-20 12:37:41.027849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.085 [2024-11-20 12:37:41.027855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.085 [2024-11-20 12:37:41.027871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.085 qpair failed and we were unable to recover it. 
00:27:58.085 [2024-11-20 12:37:41.037831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.085 [2024-11-20 12:37:41.037889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.085 [2024-11-20 12:37:41.037903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.085 [2024-11-20 12:37:41.037911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.085 [2024-11-20 12:37:41.037918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.085 [2024-11-20 12:37:41.037933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.085 qpair failed and we were unable to recover it. 
00:27:58.085 [2024-11-20 12:37:41.047791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.085 [2024-11-20 12:37:41.047850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.085 [2024-11-20 12:37:41.047865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.085 [2024-11-20 12:37:41.047872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.085 [2024-11-20 12:37:41.047878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.085 [2024-11-20 12:37:41.047896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.085 qpair failed and we were unable to recover it. 
00:27:58.085 [2024-11-20 12:37:41.057872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.085 [2024-11-20 12:37:41.057961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.085 [2024-11-20 12:37:41.057976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.085 [2024-11-20 12:37:41.057985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.085 [2024-11-20 12:37:41.057992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.085 [2024-11-20 12:37:41.058009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.085 qpair failed and we were unable to recover it. 
00:27:58.085 [2024-11-20 12:37:41.067899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.085 [2024-11-20 12:37:41.067971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.085 [2024-11-20 12:37:41.067986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.085 [2024-11-20 12:37:41.067993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.085 [2024-11-20 12:37:41.067999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.086 [2024-11-20 12:37:41.068015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.086 qpair failed and we were unable to recover it. 
00:27:58.086 [2024-11-20 12:37:41.077916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.086 [2024-11-20 12:37:41.077982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.086 [2024-11-20 12:37:41.077997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.086 [2024-11-20 12:37:41.078005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.086 [2024-11-20 12:37:41.078012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.086 [2024-11-20 12:37:41.078027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.086 qpair failed and we were unable to recover it. 
00:27:58.086 [2024-11-20 12:37:41.087954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.086 [2024-11-20 12:37:41.088012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.086 [2024-11-20 12:37:41.088026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.086 [2024-11-20 12:37:41.088034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.086 [2024-11-20 12:37:41.088040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.086 [2024-11-20 12:37:41.088055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.086 qpair failed and we were unable to recover it. 
00:27:58.086 [2024-11-20 12:37:41.097992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.086 [2024-11-20 12:37:41.098050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.086 [2024-11-20 12:37:41.098065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.086 [2024-11-20 12:37:41.098074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.086 [2024-11-20 12:37:41.098081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.086 [2024-11-20 12:37:41.098095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.086 qpair failed and we were unable to recover it. 
00:27:58.086 [2024-11-20 12:37:41.108035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.086 [2024-11-20 12:37:41.108088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.086 [2024-11-20 12:37:41.108102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.086 [2024-11-20 12:37:41.108109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.086 [2024-11-20 12:37:41.108115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.086 [2024-11-20 12:37:41.108130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.086 qpair failed and we were unable to recover it. 
00:27:58.086 [2024-11-20 12:37:41.118066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.086 [2024-11-20 12:37:41.118136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.086 [2024-11-20 12:37:41.118151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.086 [2024-11-20 12:37:41.118159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.086 [2024-11-20 12:37:41.118166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.086 [2024-11-20 12:37:41.118182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.086 qpair failed and we were unable to recover it. 
00:27:58.086 [2024-11-20 12:37:41.128069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.086 [2024-11-20 12:37:41.128125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.086 [2024-11-20 12:37:41.128139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.086 [2024-11-20 12:37:41.128146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.086 [2024-11-20 12:37:41.128153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.086 [2024-11-20 12:37:41.128169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.086 qpair failed and we were unable to recover it. 
00:27:58.086 [2024-11-20 12:37:41.138132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.086 [2024-11-20 12:37:41.138186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.086 [2024-11-20 12:37:41.138201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.086 [2024-11-20 12:37:41.138212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.086 [2024-11-20 12:37:41.138218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.086 [2024-11-20 12:37:41.138233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.086 qpair failed and we were unable to recover it. 
00:27:58.086 [2024-11-20 12:37:41.148082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.086 [2024-11-20 12:37:41.148142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.086 [2024-11-20 12:37:41.148156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.086 [2024-11-20 12:37:41.148163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.086 [2024-11-20 12:37:41.148170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.086 [2024-11-20 12:37:41.148185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.086 qpair failed and we were unable to recover it. 
00:27:58.086 [2024-11-20 12:37:41.158159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.086 [2024-11-20 12:37:41.158213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.086 [2024-11-20 12:37:41.158227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.086 [2024-11-20 12:37:41.158234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.086 [2024-11-20 12:37:41.158241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.086 [2024-11-20 12:37:41.158255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.086 qpair failed and we were unable to recover it. 
00:27:58.086 [2024-11-20 12:37:41.168180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.086 [2024-11-20 12:37:41.168275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.086 [2024-11-20 12:37:41.168289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.086 [2024-11-20 12:37:41.168297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.087 [2024-11-20 12:37:41.168303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.087 [2024-11-20 12:37:41.168318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.087 qpair failed and we were unable to recover it. 
00:27:58.087 [2024-11-20 12:37:41.178142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.087 [2024-11-20 12:37:41.178197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.087 [2024-11-20 12:37:41.178211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.087 [2024-11-20 12:37:41.178218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.087 [2024-11-20 12:37:41.178225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.087 [2024-11-20 12:37:41.178245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.087 qpair failed and we were unable to recover it. 
00:27:58.087 [2024-11-20 12:37:41.188236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.087 [2024-11-20 12:37:41.188290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.087 [2024-11-20 12:37:41.188304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.087 [2024-11-20 12:37:41.188311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.087 [2024-11-20 12:37:41.188317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.087 [2024-11-20 12:37:41.188332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.087 qpair failed and we were unable to recover it. 
00:27:58.087 [2024-11-20 12:37:41.198206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.087 [2024-11-20 12:37:41.198264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.087 [2024-11-20 12:37:41.198278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.087 [2024-11-20 12:37:41.198285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.087 [2024-11-20 12:37:41.198292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.087 [2024-11-20 12:37:41.198306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.087 qpair failed and we were unable to recover it. 
00:27:58.347 [2024-11-20 12:37:41.208237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.347 [2024-11-20 12:37:41.208295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.347 [2024-11-20 12:37:41.208309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.347 [2024-11-20 12:37:41.208316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.347 [2024-11-20 12:37:41.208322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.347 [2024-11-20 12:37:41.208337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.347 qpair failed and we were unable to recover it. 
00:27:58.347 [2024-11-20 12:37:41.218262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.347 [2024-11-20 12:37:41.218315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.347 [2024-11-20 12:37:41.218329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.347 [2024-11-20 12:37:41.218336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.347 [2024-11-20 12:37:41.218343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.347 [2024-11-20 12:37:41.218358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.347 qpair failed and we were unable to recover it. 
00:27:58.347 [2024-11-20 12:37:41.228355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.347 [2024-11-20 12:37:41.228409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.347 [2024-11-20 12:37:41.228423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.347 [2024-11-20 12:37:41.228430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.347 [2024-11-20 12:37:41.228436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.347 [2024-11-20 12:37:41.228451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.347 qpair failed and we were unable to recover it. 
00:27:58.347 [2024-11-20 12:37:41.238377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.347 [2024-11-20 12:37:41.238430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.347 [2024-11-20 12:37:41.238444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.347 [2024-11-20 12:37:41.238451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.347 [2024-11-20 12:37:41.238458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.347 [2024-11-20 12:37:41.238473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.347 qpair failed and we were unable to recover it. 
00:27:58.347 [2024-11-20 12:37:41.248403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.348 [2024-11-20 12:37:41.248491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.348 [2024-11-20 12:37:41.248505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.348 [2024-11-20 12:37:41.248512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.348 [2024-11-20 12:37:41.248518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.348 [2024-11-20 12:37:41.248533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.348 qpair failed and we were unable to recover it. 
00:27:58.348 [2024-11-20 12:37:41.258439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.348 [2024-11-20 12:37:41.258491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.348 [2024-11-20 12:37:41.258505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.348 [2024-11-20 12:37:41.258512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.348 [2024-11-20 12:37:41.258519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.348 [2024-11-20 12:37:41.258534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.348 qpair failed and we were unable to recover it. 
00:27:58.348 [2024-11-20 12:37:41.268395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.348 [2024-11-20 12:37:41.268451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.348 [2024-11-20 12:37:41.268468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.348 [2024-11-20 12:37:41.268475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.348 [2024-11-20 12:37:41.268482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.348 [2024-11-20 12:37:41.268497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.348 qpair failed and we were unable to recover it. 
00:27:58.348 [2024-11-20 12:37:41.278499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.348 [2024-11-20 12:37:41.278555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.348 [2024-11-20 12:37:41.278569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.348 [2024-11-20 12:37:41.278576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.348 [2024-11-20 12:37:41.278583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.348 [2024-11-20 12:37:41.278598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.348 qpair failed and we were unable to recover it. 
00:27:58.348 [2024-11-20 12:37:41.288529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.348 [2024-11-20 12:37:41.288584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.348 [2024-11-20 12:37:41.288599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.348 [2024-11-20 12:37:41.288606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.348 [2024-11-20 12:37:41.288613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.348 [2024-11-20 12:37:41.288627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.348 qpair failed and we were unable to recover it. 
00:27:58.348 [2024-11-20 12:37:41.298539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.348 [2024-11-20 12:37:41.298593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.348 [2024-11-20 12:37:41.298606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.348 [2024-11-20 12:37:41.298614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.348 [2024-11-20 12:37:41.298620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.348 [2024-11-20 12:37:41.298635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.348 qpair failed and we were unable to recover it.
00:27:58.348 [2024-11-20 12:37:41.308632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.348 [2024-11-20 12:37:41.308687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.348 [2024-11-20 12:37:41.308701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.348 [2024-11-20 12:37:41.308708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.348 [2024-11-20 12:37:41.308718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.348 [2024-11-20 12:37:41.308733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.348 qpair failed and we were unable to recover it.
00:27:58.348 [2024-11-20 12:37:41.318614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.348 [2024-11-20 12:37:41.318673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.348 [2024-11-20 12:37:41.318687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.348 [2024-11-20 12:37:41.318695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.348 [2024-11-20 12:37:41.318702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.348 [2024-11-20 12:37:41.318717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.348 qpair failed and we were unable to recover it.
00:27:58.348 [2024-11-20 12:37:41.328574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.348 [2024-11-20 12:37:41.328628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.348 [2024-11-20 12:37:41.328642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.348 [2024-11-20 12:37:41.328649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.348 [2024-11-20 12:37:41.328655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.348 [2024-11-20 12:37:41.328671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.348 qpair failed and we were unable to recover it.
00:27:58.348 [2024-11-20 12:37:41.338681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.348 [2024-11-20 12:37:41.338731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.348 [2024-11-20 12:37:41.338745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.348 [2024-11-20 12:37:41.338752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.348 [2024-11-20 12:37:41.338757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.348 [2024-11-20 12:37:41.338772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.348 qpair failed and we were unable to recover it.
00:27:58.348 [2024-11-20 12:37:41.348716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.348 [2024-11-20 12:37:41.348782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.348 [2024-11-20 12:37:41.348797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.348 [2024-11-20 12:37:41.348804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.348 [2024-11-20 12:37:41.348810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.348 [2024-11-20 12:37:41.348826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.348 qpair failed and we were unable to recover it.
00:27:58.348 [2024-11-20 12:37:41.358718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.348 [2024-11-20 12:37:41.358793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.348 [2024-11-20 12:37:41.358808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.348 [2024-11-20 12:37:41.358815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.348 [2024-11-20 12:37:41.358821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.348 [2024-11-20 12:37:41.358836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.348 qpair failed and we were unable to recover it.
00:27:58.348 [2024-11-20 12:37:41.368748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.348 [2024-11-20 12:37:41.368806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.348 [2024-11-20 12:37:41.368821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.348 [2024-11-20 12:37:41.368828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.348 [2024-11-20 12:37:41.368834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.348 [2024-11-20 12:37:41.368850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.348 qpair failed and we were unable to recover it.
00:27:58.348 [2024-11-20 12:37:41.378701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.349 [2024-11-20 12:37:41.378758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.349 [2024-11-20 12:37:41.378772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.349 [2024-11-20 12:37:41.378780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.349 [2024-11-20 12:37:41.378786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.349 [2024-11-20 12:37:41.378801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.349 qpair failed and we were unable to recover it.
00:27:58.349 [2024-11-20 12:37:41.388807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.349 [2024-11-20 12:37:41.388862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.349 [2024-11-20 12:37:41.388876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.349 [2024-11-20 12:37:41.388884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.349 [2024-11-20 12:37:41.388890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.349 [2024-11-20 12:37:41.388906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.349 qpair failed and we were unable to recover it.
00:27:58.349 [2024-11-20 12:37:41.398841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.349 [2024-11-20 12:37:41.398898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.349 [2024-11-20 12:37:41.398915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.349 [2024-11-20 12:37:41.398923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.349 [2024-11-20 12:37:41.398930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.349 [2024-11-20 12:37:41.398945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.349 qpair failed and we were unable to recover it.
00:27:58.349 [2024-11-20 12:37:41.408897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.349 [2024-11-20 12:37:41.408956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.349 [2024-11-20 12:37:41.408971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.349 [2024-11-20 12:37:41.408979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.349 [2024-11-20 12:37:41.408986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.349 [2024-11-20 12:37:41.409001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.349 qpair failed and we were unable to recover it.
00:27:58.349 [2024-11-20 12:37:41.418909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.349 [2024-11-20 12:37:41.418982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.349 [2024-11-20 12:37:41.418997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.349 [2024-11-20 12:37:41.419004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.349 [2024-11-20 12:37:41.419010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.349 [2024-11-20 12:37:41.419025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.349 qpair failed and we were unable to recover it.
00:27:58.349 [2024-11-20 12:37:41.428858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.349 [2024-11-20 12:37:41.428913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.349 [2024-11-20 12:37:41.428927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.349 [2024-11-20 12:37:41.428934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.349 [2024-11-20 12:37:41.428941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.349 [2024-11-20 12:37:41.428962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.349 qpair failed and we were unable to recover it.
00:27:58.349 [2024-11-20 12:37:41.438954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.349 [2024-11-20 12:37:41.439009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.349 [2024-11-20 12:37:41.439023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.349 [2024-11-20 12:37:41.439030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.349 [2024-11-20 12:37:41.439039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.349 [2024-11-20 12:37:41.439055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.349 qpair failed and we were unable to recover it.
00:27:58.349 [2024-11-20 12:37:41.448986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.349 [2024-11-20 12:37:41.449043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.349 [2024-11-20 12:37:41.449057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.349 [2024-11-20 12:37:41.449064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.349 [2024-11-20 12:37:41.449070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.349 [2024-11-20 12:37:41.449086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.349 qpair failed and we were unable to recover it.
00:27:58.349 [2024-11-20 12:37:41.459032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.349 [2024-11-20 12:37:41.459100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.349 [2024-11-20 12:37:41.459114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.349 [2024-11-20 12:37:41.459121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.349 [2024-11-20 12:37:41.459128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.349 [2024-11-20 12:37:41.459143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.349 qpair failed and we were unable to recover it.
00:27:58.610 [2024-11-20 12:37:41.469040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.610 [2024-11-20 12:37:41.469094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.610 [2024-11-20 12:37:41.469109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.610 [2024-11-20 12:37:41.469116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.610 [2024-11-20 12:37:41.469123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.610 [2024-11-20 12:37:41.469138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.610 qpair failed and we were unable to recover it.
00:27:58.610 [2024-11-20 12:37:41.479060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.610 [2024-11-20 12:37:41.479120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.610 [2024-11-20 12:37:41.479133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.610 [2024-11-20 12:37:41.479141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.610 [2024-11-20 12:37:41.479147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.610 [2024-11-20 12:37:41.479162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.610 qpair failed and we were unable to recover it.
00:27:58.610 [2024-11-20 12:37:41.489117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.610 [2024-11-20 12:37:41.489173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.610 [2024-11-20 12:37:41.489187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.610 [2024-11-20 12:37:41.489194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.610 [2024-11-20 12:37:41.489201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.610 [2024-11-20 12:37:41.489216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.610 qpair failed and we were unable to recover it.
00:27:58.610 [2024-11-20 12:37:41.499168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.610 [2024-11-20 12:37:41.499226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.610 [2024-11-20 12:37:41.499239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.610 [2024-11-20 12:37:41.499247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.610 [2024-11-20 12:37:41.499254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.610 [2024-11-20 12:37:41.499269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.610 qpair failed and we were unable to recover it.
00:27:58.610 [2024-11-20 12:37:41.509222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.610 [2024-11-20 12:37:41.509301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.610 [2024-11-20 12:37:41.509315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.610 [2024-11-20 12:37:41.509323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.610 [2024-11-20 12:37:41.509330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.610 [2024-11-20 12:37:41.509344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.610 qpair failed and we were unable to recover it.
00:27:58.610 [2024-11-20 12:37:41.519217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.610 [2024-11-20 12:37:41.519275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.610 [2024-11-20 12:37:41.519290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.610 [2024-11-20 12:37:41.519298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.610 [2024-11-20 12:37:41.519304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.610 [2024-11-20 12:37:41.519319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.610 qpair failed and we were unable to recover it.
00:27:58.610 [2024-11-20 12:37:41.529209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.610 [2024-11-20 12:37:41.529269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.610 [2024-11-20 12:37:41.529283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.610 [2024-11-20 12:37:41.529290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.610 [2024-11-20 12:37:41.529298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.610 [2024-11-20 12:37:41.529313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.610 qpair failed and we were unable to recover it.
00:27:58.610 [2024-11-20 12:37:41.539291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.610 [2024-11-20 12:37:41.539368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.610 [2024-11-20 12:37:41.539382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.610 [2024-11-20 12:37:41.539390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.610 [2024-11-20 12:37:41.539396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.610 [2024-11-20 12:37:41.539411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.610 qpair failed and we were unable to recover it.
00:27:58.610 [2024-11-20 12:37:41.549224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.610 [2024-11-20 12:37:41.549316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.610 [2024-11-20 12:37:41.549330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.610 [2024-11-20 12:37:41.549337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.610 [2024-11-20 12:37:41.549343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.610 [2024-11-20 12:37:41.549358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.610 qpair failed and we were unable to recover it.
00:27:58.610 [2024-11-20 12:37:41.559312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.610 [2024-11-20 12:37:41.559386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.610 [2024-11-20 12:37:41.559401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.610 [2024-11-20 12:37:41.559408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.610 [2024-11-20 12:37:41.559414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.610 [2024-11-20 12:37:41.559429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.610 qpair failed and we were unable to recover it.
00:27:58.610 [2024-11-20 12:37:41.569324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.610 [2024-11-20 12:37:41.569379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.610 [2024-11-20 12:37:41.569392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.610 [2024-11-20 12:37:41.569403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.610 [2024-11-20 12:37:41.569410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.610 [2024-11-20 12:37:41.569425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.610 qpair failed and we were unable to recover it.
00:27:58.610 [2024-11-20 12:37:41.579350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.610 [2024-11-20 12:37:41.579410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.610 [2024-11-20 12:37:41.579424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.610 [2024-11-20 12:37:41.579431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.610 [2024-11-20 12:37:41.579437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.610 [2024-11-20 12:37:41.579453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.610 qpair failed and we were unable to recover it.
00:27:58.610 [2024-11-20 12:37:41.589378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.610 [2024-11-20 12:37:41.589431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.610 [2024-11-20 12:37:41.589446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.610 [2024-11-20 12:37:41.589453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.611 [2024-11-20 12:37:41.589460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.611 [2024-11-20 12:37:41.589475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.611 qpair failed and we were unable to recover it.
00:27:58.611 [2024-11-20 12:37:41.599404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.611 [2024-11-20 12:37:41.599461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.611 [2024-11-20 12:37:41.599475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.611 [2024-11-20 12:37:41.599482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.611 [2024-11-20 12:37:41.599488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.611 [2024-11-20 12:37:41.599504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.611 qpair failed and we were unable to recover it.
00:27:58.611 [2024-11-20 12:37:41.609449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.611 [2024-11-20 12:37:41.609513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.611 [2024-11-20 12:37:41.609527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.611 [2024-11-20 12:37:41.609534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.611 [2024-11-20 12:37:41.609540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.611 [2024-11-20 12:37:41.609558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.611 qpair failed and we were unable to recover it.
00:27:58.611 [2024-11-20 12:37:41.619464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.611 [2024-11-20 12:37:41.619514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.611 [2024-11-20 12:37:41.619527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.611 [2024-11-20 12:37:41.619534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.611 [2024-11-20 12:37:41.619541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.611 [2024-11-20 12:37:41.619557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.611 qpair failed and we were unable to recover it.
00:27:58.611 [2024-11-20 12:37:41.629461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.611 [2024-11-20 12:37:41.629552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.611 [2024-11-20 12:37:41.629566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.611 [2024-11-20 12:37:41.629573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.611 [2024-11-20 12:37:41.629580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.611 [2024-11-20 12:37:41.629594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.611 qpair failed and we were unable to recover it.
00:27:58.611 [2024-11-20 12:37:41.639514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.611 [2024-11-20 12:37:41.639569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.611 [2024-11-20 12:37:41.639582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.611 [2024-11-20 12:37:41.639589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.611 [2024-11-20 12:37:41.639596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.611 [2024-11-20 12:37:41.639612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.611 qpair failed and we were unable to recover it.
00:27:58.611 [2024-11-20 12:37:41.649540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.611 [2024-11-20 12:37:41.649597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.611 [2024-11-20 12:37:41.649612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.611 [2024-11-20 12:37:41.649619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.611 [2024-11-20 12:37:41.649626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.611 [2024-11-20 12:37:41.649641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.611 qpair failed and we were unable to recover it. 
00:27:58.611 [2024-11-20 12:37:41.659563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.611 [2024-11-20 12:37:41.659622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.611 [2024-11-20 12:37:41.659636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.611 [2024-11-20 12:37:41.659643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.611 [2024-11-20 12:37:41.659650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.611 [2024-11-20 12:37:41.659665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.611 qpair failed and we were unable to recover it. 
00:27:58.611 [2024-11-20 12:37:41.669604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.611 [2024-11-20 12:37:41.669659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.611 [2024-11-20 12:37:41.669673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.611 [2024-11-20 12:37:41.669681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.611 [2024-11-20 12:37:41.669688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.611 [2024-11-20 12:37:41.669703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.611 qpair failed and we were unable to recover it. 
00:27:58.611 [2024-11-20 12:37:41.679614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.611 [2024-11-20 12:37:41.679672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.611 [2024-11-20 12:37:41.679686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.611 [2024-11-20 12:37:41.679693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.611 [2024-11-20 12:37:41.679701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.611 [2024-11-20 12:37:41.679715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.611 qpair failed and we were unable to recover it. 
00:27:58.611 [2024-11-20 12:37:41.689693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.611 [2024-11-20 12:37:41.689796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.611 [2024-11-20 12:37:41.689810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.611 [2024-11-20 12:37:41.689817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.611 [2024-11-20 12:37:41.689824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.611 [2024-11-20 12:37:41.689840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.611 qpair failed and we were unable to recover it. 
00:27:58.611 [2024-11-20 12:37:41.699675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.611 [2024-11-20 12:37:41.699727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.611 [2024-11-20 12:37:41.699741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.611 [2024-11-20 12:37:41.699751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.611 [2024-11-20 12:37:41.699758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.611 [2024-11-20 12:37:41.699773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.611 qpair failed and we were unable to recover it. 
00:27:58.611 [2024-11-20 12:37:41.709631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.611 [2024-11-20 12:37:41.709699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.611 [2024-11-20 12:37:41.709713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.611 [2024-11-20 12:37:41.709720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.611 [2024-11-20 12:37:41.709727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.611 [2024-11-20 12:37:41.709742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.611 qpair failed and we were unable to recover it. 
00:27:58.611 [2024-11-20 12:37:41.719742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.611 [2024-11-20 12:37:41.719800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.611 [2024-11-20 12:37:41.719815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.611 [2024-11-20 12:37:41.719822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.611 [2024-11-20 12:37:41.719829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.612 [2024-11-20 12:37:41.719843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.612 qpair failed and we were unable to recover it. 
00:27:58.872 [2024-11-20 12:37:41.729806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.872 [2024-11-20 12:37:41.729859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.872 [2024-11-20 12:37:41.729872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.872 [2024-11-20 12:37:41.729879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.872 [2024-11-20 12:37:41.729886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.872 [2024-11-20 12:37:41.729901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.872 qpair failed and we were unable to recover it. 
00:27:58.872 [2024-11-20 12:37:41.739798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.872 [2024-11-20 12:37:41.739854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.872 [2024-11-20 12:37:41.739868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.872 [2024-11-20 12:37:41.739875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.872 [2024-11-20 12:37:41.739882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.872 [2024-11-20 12:37:41.739899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.872 qpair failed and we were unable to recover it. 
00:27:58.872 [2024-11-20 12:37:41.749826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.872 [2024-11-20 12:37:41.749881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.872 [2024-11-20 12:37:41.749896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.872 [2024-11-20 12:37:41.749903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.872 [2024-11-20 12:37:41.749910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.872 [2024-11-20 12:37:41.749925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.872 qpair failed and we were unable to recover it. 
00:27:58.872 [2024-11-20 12:37:41.759847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.872 [2024-11-20 12:37:41.759904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.872 [2024-11-20 12:37:41.759918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.872 [2024-11-20 12:37:41.759925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.872 [2024-11-20 12:37:41.759932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.872 [2024-11-20 12:37:41.759954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.872 qpair failed and we were unable to recover it. 
00:27:58.872 [2024-11-20 12:37:41.769882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.872 [2024-11-20 12:37:41.769936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.872 [2024-11-20 12:37:41.769954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.872 [2024-11-20 12:37:41.769961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.872 [2024-11-20 12:37:41.769968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.872 [2024-11-20 12:37:41.769983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.872 qpair failed and we were unable to recover it. 
00:27:58.872 [2024-11-20 12:37:41.779904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.872 [2024-11-20 12:37:41.779965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.872 [2024-11-20 12:37:41.779979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.872 [2024-11-20 12:37:41.779986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.872 [2024-11-20 12:37:41.779992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.872 [2024-11-20 12:37:41.780007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.872 qpair failed and we were unable to recover it. 
00:27:58.872 [2024-11-20 12:37:41.789934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.872 [2024-11-20 12:37:41.790003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.872 [2024-11-20 12:37:41.790017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.872 [2024-11-20 12:37:41.790024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.872 [2024-11-20 12:37:41.790030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.872 [2024-11-20 12:37:41.790045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.872 qpair failed and we were unable to recover it. 
00:27:58.873 [2024-11-20 12:37:41.799977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.873 [2024-11-20 12:37:41.800042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.873 [2024-11-20 12:37:41.800055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.873 [2024-11-20 12:37:41.800063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.873 [2024-11-20 12:37:41.800069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.873 [2024-11-20 12:37:41.800084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.873 qpair failed and we were unable to recover it. 
00:27:58.873 [2024-11-20 12:37:41.810000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.873 [2024-11-20 12:37:41.810059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.873 [2024-11-20 12:37:41.810073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.873 [2024-11-20 12:37:41.810081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.873 [2024-11-20 12:37:41.810087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.873 [2024-11-20 12:37:41.810103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.873 qpair failed and we were unable to recover it. 
00:27:58.873 [2024-11-20 12:37:41.820076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.873 [2024-11-20 12:37:41.820181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.873 [2024-11-20 12:37:41.820195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.873 [2024-11-20 12:37:41.820203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.873 [2024-11-20 12:37:41.820210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.873 [2024-11-20 12:37:41.820225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.873 qpair failed and we were unable to recover it. 
00:27:58.873 [2024-11-20 12:37:41.830037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.873 [2024-11-20 12:37:41.830096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.873 [2024-11-20 12:37:41.830113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.873 [2024-11-20 12:37:41.830120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.873 [2024-11-20 12:37:41.830127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.873 [2024-11-20 12:37:41.830142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.873 qpair failed and we were unable to recover it. 
00:27:58.873 [2024-11-20 12:37:41.840073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.873 [2024-11-20 12:37:41.840133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.873 [2024-11-20 12:37:41.840147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.873 [2024-11-20 12:37:41.840154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.873 [2024-11-20 12:37:41.840161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.873 [2024-11-20 12:37:41.840176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.873 qpair failed and we were unable to recover it. 
00:27:58.873 [2024-11-20 12:37:41.850110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.873 [2024-11-20 12:37:41.850172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.873 [2024-11-20 12:37:41.850187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.873 [2024-11-20 12:37:41.850194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.873 [2024-11-20 12:37:41.850200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.873 [2024-11-20 12:37:41.850216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.873 qpair failed and we were unable to recover it. 
00:27:58.873 [2024-11-20 12:37:41.860168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.873 [2024-11-20 12:37:41.860271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.873 [2024-11-20 12:37:41.860286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.873 [2024-11-20 12:37:41.860293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.873 [2024-11-20 12:37:41.860299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.873 [2024-11-20 12:37:41.860315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.873 qpair failed and we were unable to recover it. 
00:27:58.873 [2024-11-20 12:37:41.870114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.873 [2024-11-20 12:37:41.870172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.873 [2024-11-20 12:37:41.870187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.873 [2024-11-20 12:37:41.870195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.873 [2024-11-20 12:37:41.870205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.873 [2024-11-20 12:37:41.870220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.873 qpair failed and we were unable to recover it. 
00:27:58.873 [2024-11-20 12:37:41.880227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.873 [2024-11-20 12:37:41.880288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.873 [2024-11-20 12:37:41.880302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.873 [2024-11-20 12:37:41.880309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.873 [2024-11-20 12:37:41.880316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.873 [2024-11-20 12:37:41.880331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.873 qpair failed and we were unable to recover it. 
00:27:58.873 [2024-11-20 12:37:41.890244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.873 [2024-11-20 12:37:41.890303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.873 [2024-11-20 12:37:41.890317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.873 [2024-11-20 12:37:41.890325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.873 [2024-11-20 12:37:41.890332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.873 [2024-11-20 12:37:41.890347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.873 qpair failed and we were unable to recover it. 
00:27:58.873 [2024-11-20 12:37:41.900263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.873 [2024-11-20 12:37:41.900317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.873 [2024-11-20 12:37:41.900333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.873 [2024-11-20 12:37:41.900340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.873 [2024-11-20 12:37:41.900346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.873 [2024-11-20 12:37:41.900362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.873 qpair failed and we were unable to recover it. 
00:27:58.873 [2024-11-20 12:37:41.910285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.873 [2024-11-20 12:37:41.910344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.873 [2024-11-20 12:37:41.910358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.873 [2024-11-20 12:37:41.910366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.873 [2024-11-20 12:37:41.910372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:58.873 [2024-11-20 12:37:41.910387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:58.873 qpair failed and we were unable to recover it. 
00:27:58.873 [2024-11-20 12:37:41.920315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.873 [2024-11-20 12:37:41.920396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.873 [2024-11-20 12:37:41.920410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.873 [2024-11-20 12:37:41.920418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.873 [2024-11-20 12:37:41.920424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.873 [2024-11-20 12:37:41.920439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.873 qpair failed and we were unable to recover it.
00:27:58.873 [2024-11-20 12:37:41.930403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.874 [2024-11-20 12:37:41.930459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.874 [2024-11-20 12:37:41.930473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.874 [2024-11-20 12:37:41.930480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.874 [2024-11-20 12:37:41.930486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.874 [2024-11-20 12:37:41.930501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.874 qpair failed and we were unable to recover it.
00:27:58.874 [2024-11-20 12:37:41.940400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.874 [2024-11-20 12:37:41.940502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.874 [2024-11-20 12:37:41.940515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.874 [2024-11-20 12:37:41.940522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.874 [2024-11-20 12:37:41.940529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.874 [2024-11-20 12:37:41.940544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.874 qpair failed and we were unable to recover it.
00:27:58.874 [2024-11-20 12:37:41.950383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.874 [2024-11-20 12:37:41.950439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.874 [2024-11-20 12:37:41.950452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.874 [2024-11-20 12:37:41.950459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.874 [2024-11-20 12:37:41.950465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.874 [2024-11-20 12:37:41.950480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.874 qpair failed and we were unable to recover it.
00:27:58.874 [2024-11-20 12:37:41.960463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.874 [2024-11-20 12:37:41.960522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.874 [2024-11-20 12:37:41.960539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.874 [2024-11-20 12:37:41.960546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.874 [2024-11-20 12:37:41.960553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.874 [2024-11-20 12:37:41.960568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.874 qpair failed and we were unable to recover it.
00:27:58.874 [2024-11-20 12:37:41.970392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.874 [2024-11-20 12:37:41.970457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.874 [2024-11-20 12:37:41.970472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.874 [2024-11-20 12:37:41.970479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.874 [2024-11-20 12:37:41.970486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.874 [2024-11-20 12:37:41.970500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.874 qpair failed and we were unable to recover it.
00:27:58.874 [2024-11-20 12:37:41.980499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.874 [2024-11-20 12:37:41.980556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.874 [2024-11-20 12:37:41.980570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.874 [2024-11-20 12:37:41.980577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.874 [2024-11-20 12:37:41.980584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:58.874 [2024-11-20 12:37:41.980599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:58.874 qpair failed and we were unable to recover it.
00:27:59.134 [2024-11-20 12:37:41.990475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.134 [2024-11-20 12:37:41.990529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.134 [2024-11-20 12:37:41.990543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.134 [2024-11-20 12:37:41.990550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.134 [2024-11-20 12:37:41.990556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.134 [2024-11-20 12:37:41.990571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.134 qpair failed and we were unable to recover it.
00:27:59.134 [2024-11-20 12:37:42.000602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.134 [2024-11-20 12:37:42.000660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.134 [2024-11-20 12:37:42.000674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.134 [2024-11-20 12:37:42.000681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.134 [2024-11-20 12:37:42.000691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.134 [2024-11-20 12:37:42.000706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.134 qpair failed and we were unable to recover it.
00:27:59.134 [2024-11-20 12:37:42.010597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.134 [2024-11-20 12:37:42.010657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.134 [2024-11-20 12:37:42.010672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.134 [2024-11-20 12:37:42.010680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.134 [2024-11-20 12:37:42.010687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.134 [2024-11-20 12:37:42.010702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.134 qpair failed and we were unable to recover it.
00:27:59.134 [2024-11-20 12:37:42.020639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.134 [2024-11-20 12:37:42.020714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.134 [2024-11-20 12:37:42.020728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.134 [2024-11-20 12:37:42.020735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.134 [2024-11-20 12:37:42.020741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.134 [2024-11-20 12:37:42.020755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.134 qpair failed and we were unable to recover it.
00:27:59.134 [2024-11-20 12:37:42.030656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.134 [2024-11-20 12:37:42.030709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.134 [2024-11-20 12:37:42.030722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.134 [2024-11-20 12:37:42.030729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.134 [2024-11-20 12:37:42.030735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.134 [2024-11-20 12:37:42.030750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.134 qpair failed and we were unable to recover it.
00:27:59.135 [2024-11-20 12:37:42.040672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.135 [2024-11-20 12:37:42.040727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.135 [2024-11-20 12:37:42.040740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.135 [2024-11-20 12:37:42.040747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.135 [2024-11-20 12:37:42.040754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.135 [2024-11-20 12:37:42.040769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.135 qpair failed and we were unable to recover it.
00:27:59.135 [2024-11-20 12:37:42.050716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.135 [2024-11-20 12:37:42.050785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.135 [2024-11-20 12:37:42.050799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.135 [2024-11-20 12:37:42.050807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.135 [2024-11-20 12:37:42.050813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.135 [2024-11-20 12:37:42.050828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.135 qpair failed and we were unable to recover it.
00:27:59.135 [2024-11-20 12:37:42.060728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.135 [2024-11-20 12:37:42.060788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.135 [2024-11-20 12:37:42.060801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.135 [2024-11-20 12:37:42.060809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.135 [2024-11-20 12:37:42.060815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.135 [2024-11-20 12:37:42.060830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.135 qpair failed and we were unable to recover it.
00:27:59.135 [2024-11-20 12:37:42.070763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.135 [2024-11-20 12:37:42.070826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.135 [2024-11-20 12:37:42.070840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.135 [2024-11-20 12:37:42.070847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.135 [2024-11-20 12:37:42.070853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.135 [2024-11-20 12:37:42.070868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.135 qpair failed and we were unable to recover it.
00:27:59.135 [2024-11-20 12:37:42.080805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.135 [2024-11-20 12:37:42.080864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.135 [2024-11-20 12:37:42.080878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.135 [2024-11-20 12:37:42.080885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.135 [2024-11-20 12:37:42.080891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.135 [2024-11-20 12:37:42.080906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.135 qpair failed and we were unable to recover it.
00:27:59.135 [2024-11-20 12:37:42.090820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.135 [2024-11-20 12:37:42.090886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.135 [2024-11-20 12:37:42.090900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.135 [2024-11-20 12:37:42.090908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.135 [2024-11-20 12:37:42.090915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.135 [2024-11-20 12:37:42.090929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.135 qpair failed and we were unable to recover it.
00:27:59.135 [2024-11-20 12:37:42.100784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.135 [2024-11-20 12:37:42.100837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.135 [2024-11-20 12:37:42.100851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.135 [2024-11-20 12:37:42.100858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.135 [2024-11-20 12:37:42.100864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.135 [2024-11-20 12:37:42.100879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.135 qpair failed and we were unable to recover it.
00:27:59.135 [2024-11-20 12:37:42.110877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.135 [2024-11-20 12:37:42.110933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.135 [2024-11-20 12:37:42.110950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.135 [2024-11-20 12:37:42.110958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.135 [2024-11-20 12:37:42.110964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.135 [2024-11-20 12:37:42.110979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.135 qpair failed and we were unable to recover it.
00:27:59.135 [2024-11-20 12:37:42.120898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.135 [2024-11-20 12:37:42.120970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.135 [2024-11-20 12:37:42.120984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.135 [2024-11-20 12:37:42.120991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.135 [2024-11-20 12:37:42.120997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.135 [2024-11-20 12:37:42.121011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.135 qpair failed and we were unable to recover it.
00:27:59.135 [2024-11-20 12:37:42.130938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.135 [2024-11-20 12:37:42.131010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.135 [2024-11-20 12:37:42.131024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.135 [2024-11-20 12:37:42.131034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.135 [2024-11-20 12:37:42.131041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.135 [2024-11-20 12:37:42.131056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.135 qpair failed and we were unable to recover it.
00:27:59.135 [2024-11-20 12:37:42.140974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.135 [2024-11-20 12:37:42.141027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.135 [2024-11-20 12:37:42.141041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.135 [2024-11-20 12:37:42.141048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.135 [2024-11-20 12:37:42.141054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.135 [2024-11-20 12:37:42.141070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.135 qpair failed and we were unable to recover it.
00:27:59.135 [2024-11-20 12:37:42.150990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.135 [2024-11-20 12:37:42.151049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.135 [2024-11-20 12:37:42.151063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.135 [2024-11-20 12:37:42.151070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.135 [2024-11-20 12:37:42.151076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.135 [2024-11-20 12:37:42.151091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.135 qpair failed and we were unable to recover it.
00:27:59.135 [2024-11-20 12:37:42.161012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.135 [2024-11-20 12:37:42.161083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.135 [2024-11-20 12:37:42.161097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.135 [2024-11-20 12:37:42.161104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.135 [2024-11-20 12:37:42.161110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.135 [2024-11-20 12:37:42.161125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.135 qpair failed and we were unable to recover it.
00:27:59.135 [2024-11-20 12:37:42.171079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.136 [2024-11-20 12:37:42.171148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.136 [2024-11-20 12:37:42.171162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.136 [2024-11-20 12:37:42.171169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.136 [2024-11-20 12:37:42.171175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.136 [2024-11-20 12:37:42.171196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.136 qpair failed and we were unable to recover it.
00:27:59.136 [2024-11-20 12:37:42.181095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.136 [2024-11-20 12:37:42.181156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.136 [2024-11-20 12:37:42.181170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.136 [2024-11-20 12:37:42.181177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.136 [2024-11-20 12:37:42.181184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.136 [2024-11-20 12:37:42.181198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.136 qpair failed and we were unable to recover it.
00:27:59.136 [2024-11-20 12:37:42.191124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.136 [2024-11-20 12:37:42.191182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.136 [2024-11-20 12:37:42.191196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.136 [2024-11-20 12:37:42.191204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.136 [2024-11-20 12:37:42.191210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.136 [2024-11-20 12:37:42.191225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.136 qpair failed and we were unable to recover it.
00:27:59.136 [2024-11-20 12:37:42.201153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.136 [2024-11-20 12:37:42.201222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.136 [2024-11-20 12:37:42.201237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.136 [2024-11-20 12:37:42.201245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.136 [2024-11-20 12:37:42.201251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.136 [2024-11-20 12:37:42.201265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.136 qpair failed and we were unable to recover it.
00:27:59.136 [2024-11-20 12:37:42.211173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.136 [2024-11-20 12:37:42.211231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.136 [2024-11-20 12:37:42.211245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.136 [2024-11-20 12:37:42.211253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.136 [2024-11-20 12:37:42.211260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.136 [2024-11-20 12:37:42.211274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.136 qpair failed and we were unable to recover it.
00:27:59.136 [2024-11-20 12:37:42.221190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.136 [2024-11-20 12:37:42.221302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.136 [2024-11-20 12:37:42.221315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.136 [2024-11-20 12:37:42.221323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.136 [2024-11-20 12:37:42.221330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.136 [2024-11-20 12:37:42.221345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.136 qpair failed and we were unable to recover it.
00:27:59.136 [2024-11-20 12:37:42.231143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.136 [2024-11-20 12:37:42.231199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.136 [2024-11-20 12:37:42.231213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.136 [2024-11-20 12:37:42.231220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.136 [2024-11-20 12:37:42.231227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.136 [2024-11-20 12:37:42.231242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.136 qpair failed and we were unable to recover it.
00:27:59.136 [2024-11-20 12:37:42.241276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.136 [2024-11-20 12:37:42.241344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.136 [2024-11-20 12:37:42.241359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.136 [2024-11-20 12:37:42.241366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.136 [2024-11-20 12:37:42.241373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.136 [2024-11-20 12:37:42.241389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.136 qpair failed and we were unable to recover it.
00:27:59.397 [2024-11-20 12:37:42.251276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.397 [2024-11-20 12:37:42.251331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.397 [2024-11-20 12:37:42.251345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.397 [2024-11-20 12:37:42.251352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.397 [2024-11-20 12:37:42.251359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.397 [2024-11-20 12:37:42.251374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.397 qpair failed and we were unable to recover it.
00:27:59.397 [2024-11-20 12:37:42.261299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.397 [2024-11-20 12:37:42.261349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.397 [2024-11-20 12:37:42.261365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.397 [2024-11-20 12:37:42.261373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.397 [2024-11-20 12:37:42.261380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.397 [2024-11-20 12:37:42.261394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.397 qpair failed and we were unable to recover it.
00:27:59.397 [2024-11-20 12:37:42.271332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.397 [2024-11-20 12:37:42.271399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.397 [2024-11-20 12:37:42.271413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.397 [2024-11-20 12:37:42.271421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.397 [2024-11-20 12:37:42.271426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.397 [2024-11-20 12:37:42.271441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.397 qpair failed and we were unable to recover it. 
00:27:59.397 [2024-11-20 12:37:42.281361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.397 [2024-11-20 12:37:42.281417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.397 [2024-11-20 12:37:42.281432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.397 [2024-11-20 12:37:42.281440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.397 [2024-11-20 12:37:42.281446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.397 [2024-11-20 12:37:42.281462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.397 qpair failed and we were unable to recover it. 
00:27:59.397 [2024-11-20 12:37:42.291395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.397 [2024-11-20 12:37:42.291451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.397 [2024-11-20 12:37:42.291464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.397 [2024-11-20 12:37:42.291471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.397 [2024-11-20 12:37:42.291478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.397 [2024-11-20 12:37:42.291493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.397 qpair failed and we were unable to recover it. 
00:27:59.397 [2024-11-20 12:37:42.301432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.397 [2024-11-20 12:37:42.301489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.397 [2024-11-20 12:37:42.301503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.397 [2024-11-20 12:37:42.301511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.397 [2024-11-20 12:37:42.301518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.397 [2024-11-20 12:37:42.301535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.397 qpair failed and we were unable to recover it. 
00:27:59.397 [2024-11-20 12:37:42.311458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.397 [2024-11-20 12:37:42.311510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.397 [2024-11-20 12:37:42.311523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.397 [2024-11-20 12:37:42.311530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.397 [2024-11-20 12:37:42.311536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.397 [2024-11-20 12:37:42.311551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.397 qpair failed and we were unable to recover it. 
00:27:59.397 [2024-11-20 12:37:42.321532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.397 [2024-11-20 12:37:42.321587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.397 [2024-11-20 12:37:42.321601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.397 [2024-11-20 12:37:42.321608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.397 [2024-11-20 12:37:42.321615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.397 [2024-11-20 12:37:42.321629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.397 qpair failed and we were unable to recover it. 
00:27:59.397 [2024-11-20 12:37:42.331514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.397 [2024-11-20 12:37:42.331570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.397 [2024-11-20 12:37:42.331584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.397 [2024-11-20 12:37:42.331591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.397 [2024-11-20 12:37:42.331598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.397 [2024-11-20 12:37:42.331613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.397 qpair failed and we were unable to recover it. 
00:27:59.397 [2024-11-20 12:37:42.341531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.397 [2024-11-20 12:37:42.341585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.397 [2024-11-20 12:37:42.341599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.397 [2024-11-20 12:37:42.341606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.398 [2024-11-20 12:37:42.341612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.398 [2024-11-20 12:37:42.341627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.398 qpair failed and we were unable to recover it. 
00:27:59.398 [2024-11-20 12:37:42.351542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.398 [2024-11-20 12:37:42.351632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.398 [2024-11-20 12:37:42.351647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.398 [2024-11-20 12:37:42.351654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.398 [2024-11-20 12:37:42.351660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.398 [2024-11-20 12:37:42.351676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.398 qpair failed and we were unable to recover it. 
00:27:59.398 [2024-11-20 12:37:42.361597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.398 [2024-11-20 12:37:42.361650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.398 [2024-11-20 12:37:42.361665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.398 [2024-11-20 12:37:42.361672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.398 [2024-11-20 12:37:42.361679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.398 [2024-11-20 12:37:42.361695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.398 qpair failed and we were unable to recover it. 
00:27:59.398 [2024-11-20 12:37:42.371668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.398 [2024-11-20 12:37:42.371730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.398 [2024-11-20 12:37:42.371744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.398 [2024-11-20 12:37:42.371751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.398 [2024-11-20 12:37:42.371758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.398 [2024-11-20 12:37:42.371772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.398 qpair failed and we were unable to recover it. 
00:27:59.398 [2024-11-20 12:37:42.381675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.398 [2024-11-20 12:37:42.381727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.398 [2024-11-20 12:37:42.381741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.398 [2024-11-20 12:37:42.381748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.398 [2024-11-20 12:37:42.381755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.398 [2024-11-20 12:37:42.381770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.398 qpair failed and we were unable to recover it. 
00:27:59.398 [2024-11-20 12:37:42.391696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.398 [2024-11-20 12:37:42.391752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.398 [2024-11-20 12:37:42.391769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.398 [2024-11-20 12:37:42.391777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.398 [2024-11-20 12:37:42.391783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.398 [2024-11-20 12:37:42.391797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.398 qpair failed and we were unable to recover it. 
00:27:59.398 [2024-11-20 12:37:42.401721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.398 [2024-11-20 12:37:42.401779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.398 [2024-11-20 12:37:42.401793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.398 [2024-11-20 12:37:42.401800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.398 [2024-11-20 12:37:42.401807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.398 [2024-11-20 12:37:42.401822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.398 qpair failed and we were unable to recover it. 
00:27:59.398 [2024-11-20 12:37:42.411752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.398 [2024-11-20 12:37:42.411816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.398 [2024-11-20 12:37:42.411831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.398 [2024-11-20 12:37:42.411838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.398 [2024-11-20 12:37:42.411845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.398 [2024-11-20 12:37:42.411861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.398 qpair failed and we were unable to recover it. 
00:27:59.398 [2024-11-20 12:37:42.421770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.398 [2024-11-20 12:37:42.421837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.398 [2024-11-20 12:37:42.421851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.398 [2024-11-20 12:37:42.421858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.398 [2024-11-20 12:37:42.421865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.398 [2024-11-20 12:37:42.421879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.398 qpair failed and we were unable to recover it. 
00:27:59.398 [2024-11-20 12:37:42.431789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.398 [2024-11-20 12:37:42.431839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.398 [2024-11-20 12:37:42.431853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.398 [2024-11-20 12:37:42.431860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.398 [2024-11-20 12:37:42.431870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.398 [2024-11-20 12:37:42.431885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.398 qpair failed and we were unable to recover it. 
00:27:59.398 [2024-11-20 12:37:42.441825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.398 [2024-11-20 12:37:42.441880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.398 [2024-11-20 12:37:42.441895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.398 [2024-11-20 12:37:42.441902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.398 [2024-11-20 12:37:42.441909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.398 [2024-11-20 12:37:42.441924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.398 qpair failed and we were unable to recover it. 
00:27:59.398 [2024-11-20 12:37:42.451858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.398 [2024-11-20 12:37:42.451912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.398 [2024-11-20 12:37:42.451926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.398 [2024-11-20 12:37:42.451933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.398 [2024-11-20 12:37:42.451940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.398 [2024-11-20 12:37:42.451960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.398 qpair failed and we were unable to recover it. 
00:27:59.398 [2024-11-20 12:37:42.461874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.398 [2024-11-20 12:37:42.461924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.398 [2024-11-20 12:37:42.461938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.398 [2024-11-20 12:37:42.461945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.398 [2024-11-20 12:37:42.461956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.398 [2024-11-20 12:37:42.461971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.398 qpair failed and we were unable to recover it. 
00:27:59.398 [2024-11-20 12:37:42.471831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.398 [2024-11-20 12:37:42.471892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.398 [2024-11-20 12:37:42.471906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.399 [2024-11-20 12:37:42.471913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.399 [2024-11-20 12:37:42.471919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.399 [2024-11-20 12:37:42.471935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.399 qpair failed and we were unable to recover it. 
00:27:59.399 [2024-11-20 12:37:42.481937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.399 [2024-11-20 12:37:42.482000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.399 [2024-11-20 12:37:42.482014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.399 [2024-11-20 12:37:42.482022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.399 [2024-11-20 12:37:42.482029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.399 [2024-11-20 12:37:42.482044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.399 qpair failed and we were unable to recover it. 
00:27:59.399 [2024-11-20 12:37:42.491993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.399 [2024-11-20 12:37:42.492049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.399 [2024-11-20 12:37:42.492063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.399 [2024-11-20 12:37:42.492070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.399 [2024-11-20 12:37:42.492077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.399 [2024-11-20 12:37:42.492092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.399 qpair failed and we were unable to recover it. 
00:27:59.399 [2024-11-20 12:37:42.501917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.399 [2024-11-20 12:37:42.501977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.399 [2024-11-20 12:37:42.501992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.399 [2024-11-20 12:37:42.501999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.399 [2024-11-20 12:37:42.502006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.399 [2024-11-20 12:37:42.502021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.399 qpair failed and we were unable to recover it. 
00:27:59.659 [2024-11-20 12:37:42.511979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.659 [2024-11-20 12:37:42.512037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.659 [2024-11-20 12:37:42.512052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.659 [2024-11-20 12:37:42.512060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.659 [2024-11-20 12:37:42.512067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.659 [2024-11-20 12:37:42.512082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.659 qpair failed and we were unable to recover it. 
00:27:59.659 [2024-11-20 12:37:42.521992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.659 [2024-11-20 12:37:42.522052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.659 [2024-11-20 12:37:42.522069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.659 [2024-11-20 12:37:42.522077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.659 [2024-11-20 12:37:42.522083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.659 [2024-11-20 12:37:42.522098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.659 qpair failed and we were unable to recover it. 
00:27:59.659 [2024-11-20 12:37:42.532076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.659 [2024-11-20 12:37:42.532133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.659 [2024-11-20 12:37:42.532147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.659 [2024-11-20 12:37:42.532154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.659 [2024-11-20 12:37:42.532160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.659 [2024-11-20 12:37:42.532175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.659 qpair failed and we were unable to recover it. 
00:27:59.659 [2024-11-20 12:37:42.542120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.659 [2024-11-20 12:37:42.542203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.659 [2024-11-20 12:37:42.542218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.660 [2024-11-20 12:37:42.542225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.660 [2024-11-20 12:37:42.542231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.660 [2024-11-20 12:37:42.542246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.660 qpair failed and we were unable to recover it.
00:27:59.660 [2024-11-20 12:37:42.552136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.660 [2024-11-20 12:37:42.552187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.660 [2024-11-20 12:37:42.552202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.660 [2024-11-20 12:37:42.552209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.660 [2024-11-20 12:37:42.552216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.660 [2024-11-20 12:37:42.552231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.660 qpair failed and we were unable to recover it.
00:27:59.660 [2024-11-20 12:37:42.562098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.660 [2024-11-20 12:37:42.562156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.660 [2024-11-20 12:37:42.562170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.660 [2024-11-20 12:37:42.562180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.660 [2024-11-20 12:37:42.562187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.660 [2024-11-20 12:37:42.562202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.660 qpair failed and we were unable to recover it.
00:27:59.660 [2024-11-20 12:37:42.572120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.660 [2024-11-20 12:37:42.572186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.660 [2024-11-20 12:37:42.572202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.660 [2024-11-20 12:37:42.572210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.660 [2024-11-20 12:37:42.572216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.660 [2024-11-20 12:37:42.572232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.660 qpair failed and we were unable to recover it.
00:27:59.660 [2024-11-20 12:37:42.582261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.660 [2024-11-20 12:37:42.582318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.660 [2024-11-20 12:37:42.582333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.660 [2024-11-20 12:37:42.582341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.660 [2024-11-20 12:37:42.582348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.660 [2024-11-20 12:37:42.582364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.660 qpair failed and we were unable to recover it.
00:27:59.660 [2024-11-20 12:37:42.592172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.660 [2024-11-20 12:37:42.592238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.660 [2024-11-20 12:37:42.592255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.660 [2024-11-20 12:37:42.592263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.660 [2024-11-20 12:37:42.592270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.660 [2024-11-20 12:37:42.592285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.660 qpair failed and we were unable to recover it.
00:27:59.660 [2024-11-20 12:37:42.602214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.660 [2024-11-20 12:37:42.602276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.660 [2024-11-20 12:37:42.602291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.660 [2024-11-20 12:37:42.602298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.660 [2024-11-20 12:37:42.602305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.660 [2024-11-20 12:37:42.602320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.660 qpair failed and we were unable to recover it.
00:27:59.660 [2024-11-20 12:37:42.612251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.660 [2024-11-20 12:37:42.612304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.660 [2024-11-20 12:37:42.612320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.660 [2024-11-20 12:37:42.612327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.660 [2024-11-20 12:37:42.612334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.660 [2024-11-20 12:37:42.612350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.660 qpair failed and we were unable to recover it.
00:27:59.660 [2024-11-20 12:37:42.622272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.660 [2024-11-20 12:37:42.622327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.660 [2024-11-20 12:37:42.622342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.660 [2024-11-20 12:37:42.622349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.660 [2024-11-20 12:37:42.622356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.660 [2024-11-20 12:37:42.622370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.660 qpair failed and we were unable to recover it.
00:27:59.660 [2024-11-20 12:37:42.632297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.660 [2024-11-20 12:37:42.632361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.660 [2024-11-20 12:37:42.632376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.660 [2024-11-20 12:37:42.632384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.660 [2024-11-20 12:37:42.632390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.660 [2024-11-20 12:37:42.632406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.660 qpair failed and we were unable to recover it.
00:27:59.660 [2024-11-20 12:37:42.642433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.660 [2024-11-20 12:37:42.642497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.660 [2024-11-20 12:37:42.642512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.660 [2024-11-20 12:37:42.642520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.660 [2024-11-20 12:37:42.642526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.660 [2024-11-20 12:37:42.642541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.660 qpair failed and we were unable to recover it.
00:27:59.660 [2024-11-20 12:37:42.652461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.660 [2024-11-20 12:37:42.652541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.660 [2024-11-20 12:37:42.652555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.660 [2024-11-20 12:37:42.652563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.660 [2024-11-20 12:37:42.652569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.660 [2024-11-20 12:37:42.652584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.660 qpair failed and we were unable to recover it.
00:27:59.660 [2024-11-20 12:37:42.662402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.660 [2024-11-20 12:37:42.662456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.660 [2024-11-20 12:37:42.662470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.660 [2024-11-20 12:37:42.662477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.660 [2024-11-20 12:37:42.662484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.660 [2024-11-20 12:37:42.662498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.660 qpair failed and we were unable to recover it.
00:27:59.660 [2024-11-20 12:37:42.672469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.660 [2024-11-20 12:37:42.672522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.661 [2024-11-20 12:37:42.672545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.661 [2024-11-20 12:37:42.672553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.661 [2024-11-20 12:37:42.672560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.661 [2024-11-20 12:37:42.672580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.661 qpair failed and we were unable to recover it.
00:27:59.661 [2024-11-20 12:37:42.682434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.661 [2024-11-20 12:37:42.682493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.661 [2024-11-20 12:37:42.682508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.661 [2024-11-20 12:37:42.682516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.661 [2024-11-20 12:37:42.682522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.661 [2024-11-20 12:37:42.682537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.661 qpair failed and we were unable to recover it.
00:27:59.661 [2024-11-20 12:37:42.692524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.661 [2024-11-20 12:37:42.692581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.661 [2024-11-20 12:37:42.692595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.661 [2024-11-20 12:37:42.692608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.661 [2024-11-20 12:37:42.692614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.661 [2024-11-20 12:37:42.692630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.661 qpair failed and we were unable to recover it.
00:27:59.661 [2024-11-20 12:37:42.702502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.661 [2024-11-20 12:37:42.702556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.661 [2024-11-20 12:37:42.702571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.661 [2024-11-20 12:37:42.702578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.661 [2024-11-20 12:37:42.702585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.661 [2024-11-20 12:37:42.702600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.661 qpair failed and we were unable to recover it.
00:27:59.661 [2024-11-20 12:37:42.712532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.661 [2024-11-20 12:37:42.712596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.661 [2024-11-20 12:37:42.712610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.661 [2024-11-20 12:37:42.712617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.661 [2024-11-20 12:37:42.712624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.661 [2024-11-20 12:37:42.712639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.661 qpair failed and we were unable to recover it.
00:27:59.661 [2024-11-20 12:37:42.722643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.661 [2024-11-20 12:37:42.722700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.661 [2024-11-20 12:37:42.722714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.661 [2024-11-20 12:37:42.722721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.661 [2024-11-20 12:37:42.722727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.661 [2024-11-20 12:37:42.722743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.661 qpair failed and we were unable to recover it.
00:27:59.661 [2024-11-20 12:37:42.732633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.661 [2024-11-20 12:37:42.732688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.661 [2024-11-20 12:37:42.732702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.661 [2024-11-20 12:37:42.732710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.661 [2024-11-20 12:37:42.732716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.661 [2024-11-20 12:37:42.732735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.661 qpair failed and we were unable to recover it.
00:27:59.661 [2024-11-20 12:37:42.742718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.661 [2024-11-20 12:37:42.742783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.661 [2024-11-20 12:37:42.742798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.661 [2024-11-20 12:37:42.742806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.661 [2024-11-20 12:37:42.742812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.661 [2024-11-20 12:37:42.742827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.661 qpair failed and we were unable to recover it.
00:27:59.661 [2024-11-20 12:37:42.752654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.661 [2024-11-20 12:37:42.752706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.661 [2024-11-20 12:37:42.752720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.661 [2024-11-20 12:37:42.752727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.661 [2024-11-20 12:37:42.752734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.661 [2024-11-20 12:37:42.752749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.661 qpair failed and we were unable to recover it.
00:27:59.661 [2024-11-20 12:37:42.762732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.661 [2024-11-20 12:37:42.762790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.661 [2024-11-20 12:37:42.762805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.661 [2024-11-20 12:37:42.762813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.661 [2024-11-20 12:37:42.762819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.661 [2024-11-20 12:37:42.762835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.661 qpair failed and we were unable to recover it.
00:27:59.661 [2024-11-20 12:37:42.772774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.661 [2024-11-20 12:37:42.772851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.661 [2024-11-20 12:37:42.772865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.661 [2024-11-20 12:37:42.772873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.661 [2024-11-20 12:37:42.772879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.661 [2024-11-20 12:37:42.772893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.661 qpair failed and we were unable to recover it.
00:27:59.921 [2024-11-20 12:37:42.782757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.921 [2024-11-20 12:37:42.782847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.921 [2024-11-20 12:37:42.782861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.921 [2024-11-20 12:37:42.782869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.921 [2024-11-20 12:37:42.782875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.921 [2024-11-20 12:37:42.782890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.921 qpair failed and we were unable to recover it.
00:27:59.921 [2024-11-20 12:37:42.792763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.921 [2024-11-20 12:37:42.792818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.921 [2024-11-20 12:37:42.792832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.921 [2024-11-20 12:37:42.792839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.921 [2024-11-20 12:37:42.792846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.921 [2024-11-20 12:37:42.792861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.921 qpair failed and we were unable to recover it.
00:27:59.922 [2024-11-20 12:37:42.802795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.922 [2024-11-20 12:37:42.802848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.922 [2024-11-20 12:37:42.802862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.922 [2024-11-20 12:37:42.802870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.922 [2024-11-20 12:37:42.802876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.922 [2024-11-20 12:37:42.802891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.922 qpair failed and we were unable to recover it.
00:27:59.922 [2024-11-20 12:37:42.812928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.922 [2024-11-20 12:37:42.812991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.922 [2024-11-20 12:37:42.813005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.922 [2024-11-20 12:37:42.813013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.922 [2024-11-20 12:37:42.813020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.922 [2024-11-20 12:37:42.813035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.922 qpair failed and we were unable to recover it.
00:27:59.922 [2024-11-20 12:37:42.822845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.922 [2024-11-20 12:37:42.822909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.922 [2024-11-20 12:37:42.822927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.922 [2024-11-20 12:37:42.822934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.922 [2024-11-20 12:37:42.822940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.922 [2024-11-20 12:37:42.822959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.922 qpair failed and we were unable to recover it.
00:27:59.922 [2024-11-20 12:37:42.832880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.922 [2024-11-20 12:37:42.832940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.922 [2024-11-20 12:37:42.832960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.922 [2024-11-20 12:37:42.832967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.922 [2024-11-20 12:37:42.832974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.922 [2024-11-20 12:37:42.832989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.922 qpair failed and we were unable to recover it.
00:27:59.922 [2024-11-20 12:37:42.842983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.922 [2024-11-20 12:37:42.843040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.922 [2024-11-20 12:37:42.843054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.922 [2024-11-20 12:37:42.843061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.922 [2024-11-20 12:37:42.843068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.922 [2024-11-20 12:37:42.843084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.922 qpair failed and we were unable to recover it.
00:27:59.922 [2024-11-20 12:37:42.853006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.922 [2024-11-20 12:37:42.853073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.922 [2024-11-20 12:37:42.853088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.922 [2024-11-20 12:37:42.853096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.922 [2024-11-20 12:37:42.853102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.922 [2024-11-20 12:37:42.853118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.922 qpair failed and we were unable to recover it.
00:27:59.922 [2024-11-20 12:37:42.862974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.922 [2024-11-20 12:37:42.863031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.922 [2024-11-20 12:37:42.863047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.922 [2024-11-20 12:37:42.863054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.922 [2024-11-20 12:37:42.863061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.922 [2024-11-20 12:37:42.863080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.922 qpair failed and we were unable to recover it.
00:27:59.922 [2024-11-20 12:37:42.873037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.922 [2024-11-20 12:37:42.873090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.922 [2024-11-20 12:37:42.873104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.922 [2024-11-20 12:37:42.873112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.922 [2024-11-20 12:37:42.873118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.922 [2024-11-20 12:37:42.873133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.922 qpair failed and we were unable to recover it.
00:27:59.922 [2024-11-20 12:37:42.883072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.922 [2024-11-20 12:37:42.883131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.922 [2024-11-20 12:37:42.883144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.922 [2024-11-20 12:37:42.883151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.922 [2024-11-20 12:37:42.883159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:27:59.922 [2024-11-20 12:37:42.883174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:59.922 qpair failed and we were unable to recover it.
00:27:59.922 [2024-11-20 12:37:42.893122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.922 [2024-11-20 12:37:42.893183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.922 [2024-11-20 12:37:42.893197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.922 [2024-11-20 12:37:42.893205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.922 [2024-11-20 12:37:42.893211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.922 [2024-11-20 12:37:42.893227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.922 qpair failed and we were unable to recover it. 
00:27:59.922 [2024-11-20 12:37:42.903073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.922 [2024-11-20 12:37:42.903130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.922 [2024-11-20 12:37:42.903145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.922 [2024-11-20 12:37:42.903154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.922 [2024-11-20 12:37:42.903161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.922 [2024-11-20 12:37:42.903177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.922 qpair failed and we were unable to recover it. 
00:27:59.922 [2024-11-20 12:37:42.913226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.922 [2024-11-20 12:37:42.913329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.922 [2024-11-20 12:37:42.913343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.922 [2024-11-20 12:37:42.913350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.922 [2024-11-20 12:37:42.913357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.922 [2024-11-20 12:37:42.913371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.922 qpair failed and we were unable to recover it. 
00:27:59.922 [2024-11-20 12:37:42.923196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.922 [2024-11-20 12:37:42.923250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.922 [2024-11-20 12:37:42.923264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.922 [2024-11-20 12:37:42.923271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.923 [2024-11-20 12:37:42.923277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.923 [2024-11-20 12:37:42.923292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.923 qpair failed and we were unable to recover it. 
00:27:59.923 [2024-11-20 12:37:42.933225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.923 [2024-11-20 12:37:42.933301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.923 [2024-11-20 12:37:42.933314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.923 [2024-11-20 12:37:42.933321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.923 [2024-11-20 12:37:42.933327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.923 [2024-11-20 12:37:42.933343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.923 qpair failed and we were unable to recover it. 
00:27:59.923 [2024-11-20 12:37:42.943247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.923 [2024-11-20 12:37:42.943304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.923 [2024-11-20 12:37:42.943318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.923 [2024-11-20 12:37:42.943326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.923 [2024-11-20 12:37:42.943333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.923 [2024-11-20 12:37:42.943347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.923 qpair failed and we were unable to recover it. 
00:27:59.923 [2024-11-20 12:37:42.953284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.923 [2024-11-20 12:37:42.953351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.923 [2024-11-20 12:37:42.953368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.923 [2024-11-20 12:37:42.953375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.923 [2024-11-20 12:37:42.953382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.923 [2024-11-20 12:37:42.953396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.923 qpair failed and we were unable to recover it. 
00:27:59.923 [2024-11-20 12:37:42.963307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.923 [2024-11-20 12:37:42.963361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.923 [2024-11-20 12:37:42.963374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.923 [2024-11-20 12:37:42.963381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.923 [2024-11-20 12:37:42.963387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.923 [2024-11-20 12:37:42.963403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.923 qpair failed and we were unable to recover it. 
00:27:59.923 [2024-11-20 12:37:42.973319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.923 [2024-11-20 12:37:42.973372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.923 [2024-11-20 12:37:42.973386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.923 [2024-11-20 12:37:42.973393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.923 [2024-11-20 12:37:42.973399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.923 [2024-11-20 12:37:42.973416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.923 qpair failed and we were unable to recover it. 
00:27:59.923 [2024-11-20 12:37:42.983314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.923 [2024-11-20 12:37:42.983413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.923 [2024-11-20 12:37:42.983428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.923 [2024-11-20 12:37:42.983436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.923 [2024-11-20 12:37:42.983443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.923 [2024-11-20 12:37:42.983458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.923 qpair failed and we were unable to recover it. 
00:27:59.923 [2024-11-20 12:37:42.993391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.923 [2024-11-20 12:37:42.993443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.923 [2024-11-20 12:37:42.993457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.923 [2024-11-20 12:37:42.993463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.923 [2024-11-20 12:37:42.993473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.923 [2024-11-20 12:37:42.993488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.923 qpair failed and we were unable to recover it. 
00:27:59.923 [2024-11-20 12:37:43.003431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.923 [2024-11-20 12:37:43.003491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.923 [2024-11-20 12:37:43.003505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.923 [2024-11-20 12:37:43.003512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.923 [2024-11-20 12:37:43.003518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.923 [2024-11-20 12:37:43.003533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.923 qpair failed and we were unable to recover it. 
00:27:59.923 [2024-11-20 12:37:43.013470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.923 [2024-11-20 12:37:43.013542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.923 [2024-11-20 12:37:43.013556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.923 [2024-11-20 12:37:43.013563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.923 [2024-11-20 12:37:43.013569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.923 [2024-11-20 12:37:43.013583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.923 qpair failed and we were unable to recover it. 
00:27:59.923 [2024-11-20 12:37:43.023528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.923 [2024-11-20 12:37:43.023578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.923 [2024-11-20 12:37:43.023592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.923 [2024-11-20 12:37:43.023599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.923 [2024-11-20 12:37:43.023605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.923 [2024-11-20 12:37:43.023620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.923 qpair failed and we were unable to recover it. 
00:27:59.923 [2024-11-20 12:37:43.033580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.923 [2024-11-20 12:37:43.033637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.923 [2024-11-20 12:37:43.033651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.923 [2024-11-20 12:37:43.033658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.923 [2024-11-20 12:37:43.033664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:27:59.923 [2024-11-20 12:37:43.033679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:59.923 qpair failed and we were unable to recover it. 
00:28:00.183 [2024-11-20 12:37:43.043571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.184 [2024-11-20 12:37:43.043649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.184 [2024-11-20 12:37:43.043665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.184 [2024-11-20 12:37:43.043672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.184 [2024-11-20 12:37:43.043678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.184 [2024-11-20 12:37:43.043692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.184 qpair failed and we were unable to recover it. 
00:28:00.184 [2024-11-20 12:37:43.053518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.184 [2024-11-20 12:37:43.053570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.184 [2024-11-20 12:37:43.053584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.184 [2024-11-20 12:37:43.053591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.184 [2024-11-20 12:37:43.053598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.184 [2024-11-20 12:37:43.053613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.184 qpair failed and we were unable to recover it. 
00:28:00.184 [2024-11-20 12:37:43.063598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.184 [2024-11-20 12:37:43.063679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.184 [2024-11-20 12:37:43.063692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.184 [2024-11-20 12:37:43.063700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.184 [2024-11-20 12:37:43.063706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.184 [2024-11-20 12:37:43.063721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.184 qpair failed and we were unable to recover it. 
00:28:00.184 [2024-11-20 12:37:43.073649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.184 [2024-11-20 12:37:43.073701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.184 [2024-11-20 12:37:43.073715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.184 [2024-11-20 12:37:43.073722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.184 [2024-11-20 12:37:43.073728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.184 [2024-11-20 12:37:43.073743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.184 qpair failed and we were unable to recover it. 
00:28:00.184 [2024-11-20 12:37:43.083664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.184 [2024-11-20 12:37:43.083718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.184 [2024-11-20 12:37:43.083735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.184 [2024-11-20 12:37:43.083742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.184 [2024-11-20 12:37:43.083748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.184 [2024-11-20 12:37:43.083764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.184 qpair failed and we were unable to recover it. 
00:28:00.184 [2024-11-20 12:37:43.093631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.184 [2024-11-20 12:37:43.093699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.184 [2024-11-20 12:37:43.093714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.184 [2024-11-20 12:37:43.093721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.184 [2024-11-20 12:37:43.093727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.184 [2024-11-20 12:37:43.093743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.184 qpair failed and we were unable to recover it. 
00:28:00.184 [2024-11-20 12:37:43.103717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.184 [2024-11-20 12:37:43.103771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.184 [2024-11-20 12:37:43.103787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.184 [2024-11-20 12:37:43.103795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.184 [2024-11-20 12:37:43.103802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.184 [2024-11-20 12:37:43.103818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.184 qpair failed and we were unable to recover it. 
00:28:00.184 [2024-11-20 12:37:43.113771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.184 [2024-11-20 12:37:43.113824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.184 [2024-11-20 12:37:43.113838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.184 [2024-11-20 12:37:43.113846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.184 [2024-11-20 12:37:43.113852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.184 [2024-11-20 12:37:43.113867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.184 qpair failed and we were unable to recover it. 
00:28:00.184 [2024-11-20 12:37:43.123781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.184 [2024-11-20 12:37:43.123879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.184 [2024-11-20 12:37:43.123893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.184 [2024-11-20 12:37:43.123904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.184 [2024-11-20 12:37:43.123911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.184 [2024-11-20 12:37:43.123926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.184 qpair failed and we were unable to recover it. 
00:28:00.184 [2024-11-20 12:37:43.133806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.184 [2024-11-20 12:37:43.133860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.184 [2024-11-20 12:37:43.133874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.184 [2024-11-20 12:37:43.133881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.184 [2024-11-20 12:37:43.133887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.184 [2024-11-20 12:37:43.133902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.184 qpair failed and we were unable to recover it. 
00:28:00.184 [2024-11-20 12:37:43.143845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.184 [2024-11-20 12:37:43.143896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.184 [2024-11-20 12:37:43.143910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.184 [2024-11-20 12:37:43.143917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.184 [2024-11-20 12:37:43.143924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.184 [2024-11-20 12:37:43.143939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.184 qpair failed and we were unable to recover it. 
00:28:00.184 [2024-11-20 12:37:43.153869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.184 [2024-11-20 12:37:43.153938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.184 [2024-11-20 12:37:43.153956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.184 [2024-11-20 12:37:43.153964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.184 [2024-11-20 12:37:43.153970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.184 [2024-11-20 12:37:43.153986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.184 qpair failed and we were unable to recover it. 
00:28:00.184 [2024-11-20 12:37:43.163886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.184 [2024-11-20 12:37:43.163943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.184 [2024-11-20 12:37:43.163961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.184 [2024-11-20 12:37:43.163969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.184 [2024-11-20 12:37:43.163975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.184 [2024-11-20 12:37:43.163990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.184 qpair failed and we were unable to recover it. 
00:28:00.185 [2024-11-20 12:37:43.173935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.185 [2024-11-20 12:37:43.173992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.185 [2024-11-20 12:37:43.174006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.185 [2024-11-20 12:37:43.174014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.185 [2024-11-20 12:37:43.174020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.185 [2024-11-20 12:37:43.174036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.185 qpair failed and we were unable to recover it. 
00:28:00.185 [2024-11-20 12:37:43.183955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.185 [2024-11-20 12:37:43.184025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.185 [2024-11-20 12:37:43.184040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.185 [2024-11-20 12:37:43.184047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.185 [2024-11-20 12:37:43.184054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.185 [2024-11-20 12:37:43.184069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.185 qpair failed and we were unable to recover it. 
00:28:00.185 [2024-11-20 12:37:43.193995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.185 [2024-11-20 12:37:43.194060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.185 [2024-11-20 12:37:43.194074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.185 [2024-11-20 12:37:43.194082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.185 [2024-11-20 12:37:43.194088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.185 [2024-11-20 12:37:43.194103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.185 qpair failed and we were unable to recover it. 
00:28:00.185 [2024-11-20 12:37:43.204024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.185 [2024-11-20 12:37:43.204082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.185 [2024-11-20 12:37:43.204096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.185 [2024-11-20 12:37:43.204103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.185 [2024-11-20 12:37:43.204109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.185 [2024-11-20 12:37:43.204124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.185 qpair failed and we were unable to recover it. 
00:28:00.185 [2024-11-20 12:37:43.214060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.185 [2024-11-20 12:37:43.214120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.185 [2024-11-20 12:37:43.214134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.185 [2024-11-20 12:37:43.214141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.185 [2024-11-20 12:37:43.214148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.185 [2024-11-20 12:37:43.214163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.185 qpair failed and we were unable to recover it. 
00:28:00.185 [2024-11-20 12:37:43.224078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.185 [2024-11-20 12:37:43.224135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.185 [2024-11-20 12:37:43.224149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.185 [2024-11-20 12:37:43.224156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.185 [2024-11-20 12:37:43.224162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.185 [2024-11-20 12:37:43.224177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.185 qpair failed and we were unable to recover it. 
00:28:00.185 [2024-11-20 12:37:43.234104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.185 [2024-11-20 12:37:43.234159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.185 [2024-11-20 12:37:43.234175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.185 [2024-11-20 12:37:43.234182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.185 [2024-11-20 12:37:43.234189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.185 [2024-11-20 12:37:43.234204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.185 qpair failed and we were unable to recover it. 
00:28:00.185 [2024-11-20 12:37:43.244154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.185 [2024-11-20 12:37:43.244209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.185 [2024-11-20 12:37:43.244223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.185 [2024-11-20 12:37:43.244230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.185 [2024-11-20 12:37:43.244237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.185 [2024-11-20 12:37:43.244252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.185 qpair failed and we were unable to recover it. 
00:28:00.185 [2024-11-20 12:37:43.254103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.185 [2024-11-20 12:37:43.254157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.185 [2024-11-20 12:37:43.254171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.185 [2024-11-20 12:37:43.254180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.185 [2024-11-20 12:37:43.254187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.185 [2024-11-20 12:37:43.254203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.185 qpair failed and we were unable to recover it. 
00:28:00.185 [2024-11-20 12:37:43.264198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.185 [2024-11-20 12:37:43.264252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.185 [2024-11-20 12:37:43.264266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.185 [2024-11-20 12:37:43.264273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.185 [2024-11-20 12:37:43.264280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.185 [2024-11-20 12:37:43.264295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.185 qpair failed and we were unable to recover it. 
00:28:00.185 [2024-11-20 12:37:43.274253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.185 [2024-11-20 12:37:43.274305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.185 [2024-11-20 12:37:43.274318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.185 [2024-11-20 12:37:43.274325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.185 [2024-11-20 12:37:43.274331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.185 [2024-11-20 12:37:43.274347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.185 qpair failed and we were unable to recover it. 
00:28:00.185 [2024-11-20 12:37:43.284288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.185 [2024-11-20 12:37:43.284363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.185 [2024-11-20 12:37:43.284377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.185 [2024-11-20 12:37:43.284384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.185 [2024-11-20 12:37:43.284391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.185 [2024-11-20 12:37:43.284406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.185 qpair failed and we were unable to recover it. 
00:28:00.185 [2024-11-20 12:37:43.294341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.185 [2024-11-20 12:37:43.294401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.185 [2024-11-20 12:37:43.294414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.185 [2024-11-20 12:37:43.294422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.185 [2024-11-20 12:37:43.294428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.185 [2024-11-20 12:37:43.294447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.185 qpair failed and we were unable to recover it. 
00:28:00.446 [2024-11-20 12:37:43.304322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.446 [2024-11-20 12:37:43.304375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.446 [2024-11-20 12:37:43.304388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.446 [2024-11-20 12:37:43.304395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.446 [2024-11-20 12:37:43.304402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.446 [2024-11-20 12:37:43.304417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.446 qpair failed and we were unable to recover it. 
00:28:00.446 [2024-11-20 12:37:43.314322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.446 [2024-11-20 12:37:43.314372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.446 [2024-11-20 12:37:43.314385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.446 [2024-11-20 12:37:43.314392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.446 [2024-11-20 12:37:43.314398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.446 [2024-11-20 12:37:43.314413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.446 qpair failed and we were unable to recover it. 
00:28:00.446 [2024-11-20 12:37:43.324364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.446 [2024-11-20 12:37:43.324419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.446 [2024-11-20 12:37:43.324433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.446 [2024-11-20 12:37:43.324440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.446 [2024-11-20 12:37:43.324447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.446 [2024-11-20 12:37:43.324462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.446 qpair failed and we were unable to recover it. 
00:28:00.446 [2024-11-20 12:37:43.334395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.446 [2024-11-20 12:37:43.334459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.446 [2024-11-20 12:37:43.334473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.446 [2024-11-20 12:37:43.334480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.446 [2024-11-20 12:37:43.334486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.446 [2024-11-20 12:37:43.334502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.446 qpair failed and we were unable to recover it. 
00:28:00.446 [2024-11-20 12:37:43.344420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.446 [2024-11-20 12:37:43.344476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.446 [2024-11-20 12:37:43.344490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.446 [2024-11-20 12:37:43.344498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.446 [2024-11-20 12:37:43.344504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.446 [2024-11-20 12:37:43.344520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.446 qpair failed and we were unable to recover it. 
00:28:00.446 [2024-11-20 12:37:43.354457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.446 [2024-11-20 12:37:43.354513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.446 [2024-11-20 12:37:43.354527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.446 [2024-11-20 12:37:43.354535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.446 [2024-11-20 12:37:43.354542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.446 [2024-11-20 12:37:43.354557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.446 qpair failed and we were unable to recover it. 
00:28:00.446 [2024-11-20 12:37:43.364473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.446 [2024-11-20 12:37:43.364533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.446 [2024-11-20 12:37:43.364547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.446 [2024-11-20 12:37:43.364554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.446 [2024-11-20 12:37:43.364561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.446 [2024-11-20 12:37:43.364576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.446 qpair failed and we were unable to recover it. 
00:28:00.446 [2024-11-20 12:37:43.374496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.446 [2024-11-20 12:37:43.374552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.446 [2024-11-20 12:37:43.374566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.446 [2024-11-20 12:37:43.374573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.446 [2024-11-20 12:37:43.374580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.446 [2024-11-20 12:37:43.374594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.446 qpair failed and we were unable to recover it. 
00:28:00.446 [2024-11-20 12:37:43.384535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.446 [2024-11-20 12:37:43.384588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.446 [2024-11-20 12:37:43.384605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.446 [2024-11-20 12:37:43.384613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.446 [2024-11-20 12:37:43.384619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.446 [2024-11-20 12:37:43.384634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.446 qpair failed and we were unable to recover it. 
00:28:00.446 [2024-11-20 12:37:43.394554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.446 [2024-11-20 12:37:43.394606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.446 [2024-11-20 12:37:43.394620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.446 [2024-11-20 12:37:43.394627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.446 [2024-11-20 12:37:43.394634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.446 [2024-11-20 12:37:43.394649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.446 qpair failed and we were unable to recover it. 
00:28:00.446 [2024-11-20 12:37:43.404587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.446 [2024-11-20 12:37:43.404643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.446 [2024-11-20 12:37:43.404657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.446 [2024-11-20 12:37:43.404664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.446 [2024-11-20 12:37:43.404671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.446 [2024-11-20 12:37:43.404686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.446 qpair failed and we were unable to recover it. 
00:28:00.446 [2024-11-20 12:37:43.414622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.446 [2024-11-20 12:37:43.414682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.446 [2024-11-20 12:37:43.414695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.446 [2024-11-20 12:37:43.414703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.446 [2024-11-20 12:37:43.414710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.446 [2024-11-20 12:37:43.414725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.446 qpair failed and we were unable to recover it. 
00:28:00.446 [2024-11-20 12:37:43.424650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.447 [2024-11-20 12:37:43.424704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.447 [2024-11-20 12:37:43.424718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.447 [2024-11-20 12:37:43.424726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.447 [2024-11-20 12:37:43.424737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.447 [2024-11-20 12:37:43.424753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.447 qpair failed and we were unable to recover it. 
00:28:00.447 [2024-11-20 12:37:43.434671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.447 [2024-11-20 12:37:43.434726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.447 [2024-11-20 12:37:43.434739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.447 [2024-11-20 12:37:43.434746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.447 [2024-11-20 12:37:43.434752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.447 [2024-11-20 12:37:43.434768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.447 qpair failed and we were unable to recover it. 
00:28:00.447 [2024-11-20 12:37:43.444705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.447 [2024-11-20 12:37:43.444762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.447 [2024-11-20 12:37:43.444776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.447 [2024-11-20 12:37:43.444783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.447 [2024-11-20 12:37:43.444790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.447 [2024-11-20 12:37:43.444805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.447 qpair failed and we were unable to recover it. 
00:28:00.447 [2024-11-20 12:37:43.454758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.447 [2024-11-20 12:37:43.454821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.447 [2024-11-20 12:37:43.454834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.447 [2024-11-20 12:37:43.454842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.447 [2024-11-20 12:37:43.454848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.447 [2024-11-20 12:37:43.454864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.447 qpair failed and we were unable to recover it. 
00:28:00.447 [2024-11-20 12:37:43.464768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.447 [2024-11-20 12:37:43.464820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.447 [2024-11-20 12:37:43.464834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.447 [2024-11-20 12:37:43.464841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.447 [2024-11-20 12:37:43.464848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.447 [2024-11-20 12:37:43.464863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.447 qpair failed and we were unable to recover it. 
00:28:00.447 [2024-11-20 12:37:43.474781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.447 [2024-11-20 12:37:43.474877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.447 [2024-11-20 12:37:43.474891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.447 [2024-11-20 12:37:43.474898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.447 [2024-11-20 12:37:43.474904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.447 [2024-11-20 12:37:43.474919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.447 qpair failed and we were unable to recover it. 
00:28:00.447 [2024-11-20 12:37:43.484819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.447 [2024-11-20 12:37:43.484880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.447 [2024-11-20 12:37:43.484893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.447 [2024-11-20 12:37:43.484901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.447 [2024-11-20 12:37:43.484907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.447 [2024-11-20 12:37:43.484922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.447 qpair failed and we were unable to recover it. 
00:28:00.447 [2024-11-20 12:37:43.494859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.447 [2024-11-20 12:37:43.494917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.447 [2024-11-20 12:37:43.494931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.447 [2024-11-20 12:37:43.494938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.447 [2024-11-20 12:37:43.494945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.447 [2024-11-20 12:37:43.494965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.447 qpair failed and we were unable to recover it. 
00:28:00.447 [2024-11-20 12:37:43.504868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.447 [2024-11-20 12:37:43.504924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.447 [2024-11-20 12:37:43.504938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.447 [2024-11-20 12:37:43.504946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.447 [2024-11-20 12:37:43.504956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.447 [2024-11-20 12:37:43.504971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.447 qpair failed and we were unable to recover it. 
00:28:00.447 [2024-11-20 12:37:43.514905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.447 [2024-11-20 12:37:43.514959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.447 [2024-11-20 12:37:43.514977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.447 [2024-11-20 12:37:43.514985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.447 [2024-11-20 12:37:43.514991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.447 [2024-11-20 12:37:43.515007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.447 qpair failed and we were unable to recover it. 
00:28:00.447 [2024-11-20 12:37:43.524932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.447 [2024-11-20 12:37:43.524998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.447 [2024-11-20 12:37:43.525012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.447 [2024-11-20 12:37:43.525019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.447 [2024-11-20 12:37:43.525026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.447 [2024-11-20 12:37:43.525042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.447 qpair failed and we were unable to recover it. 
00:28:00.447 [2024-11-20 12:37:43.534941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.447 [2024-11-20 12:37:43.535006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.447 [2024-11-20 12:37:43.535020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.447 [2024-11-20 12:37:43.535028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.447 [2024-11-20 12:37:43.535034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.447 [2024-11-20 12:37:43.535048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.447 qpair failed and we were unable to recover it. 
00:28:00.447 [2024-11-20 12:37:43.544980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.447 [2024-11-20 12:37:43.545037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.447 [2024-11-20 12:37:43.545051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.447 [2024-11-20 12:37:43.545058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.447 [2024-11-20 12:37:43.545066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.447 [2024-11-20 12:37:43.545080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.447 qpair failed and we were unable to recover it. 
00:28:00.447 [2024-11-20 12:37:43.555031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.447 [2024-11-20 12:37:43.555081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.448 [2024-11-20 12:37:43.555095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.448 [2024-11-20 12:37:43.555102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.448 [2024-11-20 12:37:43.555112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.448 [2024-11-20 12:37:43.555127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.448 qpair failed and we were unable to recover it. 
00:28:00.708 [2024-11-20 12:37:43.565044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.708 [2024-11-20 12:37:43.565100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.708 [2024-11-20 12:37:43.565114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.708 [2024-11-20 12:37:43.565122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.708 [2024-11-20 12:37:43.565129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.708 [2024-11-20 12:37:43.565144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.708 qpair failed and we were unable to recover it. 
00:28:00.708 [2024-11-20 12:37:43.575082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.708 [2024-11-20 12:37:43.575140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.708 [2024-11-20 12:37:43.575154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.708 [2024-11-20 12:37:43.575162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.708 [2024-11-20 12:37:43.575168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.708 [2024-11-20 12:37:43.575183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.708 qpair failed and we were unable to recover it. 
00:28:00.708 [2024-11-20 12:37:43.585110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.708 [2024-11-20 12:37:43.585164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.708 [2024-11-20 12:37:43.585177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.708 [2024-11-20 12:37:43.585184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.708 [2024-11-20 12:37:43.585191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.708 [2024-11-20 12:37:43.585206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.708 qpair failed and we were unable to recover it. 
00:28:00.708 [2024-11-20 12:37:43.595157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.708 [2024-11-20 12:37:43.595211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.708 [2024-11-20 12:37:43.595225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.708 [2024-11-20 12:37:43.595233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.708 [2024-11-20 12:37:43.595240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.708 [2024-11-20 12:37:43.595255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.708 qpair failed and we were unable to recover it. 
00:28:00.708 [2024-11-20 12:37:43.605178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.708 [2024-11-20 12:37:43.605235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.708 [2024-11-20 12:37:43.605249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.708 [2024-11-20 12:37:43.605256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.708 [2024-11-20 12:37:43.605263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.708 [2024-11-20 12:37:43.605278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.708 qpair failed and we were unable to recover it. 
00:28:00.708 [2024-11-20 12:37:43.615189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.708 [2024-11-20 12:37:43.615243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.708 [2024-11-20 12:37:43.615258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.708 [2024-11-20 12:37:43.615266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.708 [2024-11-20 12:37:43.615272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.708 [2024-11-20 12:37:43.615287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.708 qpair failed and we were unable to recover it. 
00:28:00.708 [2024-11-20 12:37:43.625269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.708 [2024-11-20 12:37:43.625326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.708 [2024-11-20 12:37:43.625340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.708 [2024-11-20 12:37:43.625347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.708 [2024-11-20 12:37:43.625353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.708 [2024-11-20 12:37:43.625368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.708 qpair failed and we were unable to recover it. 
00:28:00.708 [2024-11-20 12:37:43.635254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.708 [2024-11-20 12:37:43.635326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.708 [2024-11-20 12:37:43.635340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.708 [2024-11-20 12:37:43.635347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.708 [2024-11-20 12:37:43.635353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.708 [2024-11-20 12:37:43.635368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.708 qpair failed and we were unable to recover it. 
00:28:00.708 [2024-11-20 12:37:43.645303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.708 [2024-11-20 12:37:43.645375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.708 [2024-11-20 12:37:43.645394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.708 [2024-11-20 12:37:43.645401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.708 [2024-11-20 12:37:43.645408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.708 [2024-11-20 12:37:43.645425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.708 qpair failed and we were unable to recover it. 
00:28:00.708 [2024-11-20 12:37:43.655321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.708 [2024-11-20 12:37:43.655387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.708 [2024-11-20 12:37:43.655401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.708 [2024-11-20 12:37:43.655409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.708 [2024-11-20 12:37:43.655416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.709 [2024-11-20 12:37:43.655432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.709 qpair failed and we were unable to recover it. 
00:28:00.709 [2024-11-20 12:37:43.665337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.709 [2024-11-20 12:37:43.665390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.709 [2024-11-20 12:37:43.665405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.709 [2024-11-20 12:37:43.665412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.709 [2024-11-20 12:37:43.665419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.709 [2024-11-20 12:37:43.665435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.709 qpair failed and we were unable to recover it. 
00:28:00.709 [2024-11-20 12:37:43.675365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.709 [2024-11-20 12:37:43.675415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.709 [2024-11-20 12:37:43.675430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.709 [2024-11-20 12:37:43.675437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.709 [2024-11-20 12:37:43.675444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.709 [2024-11-20 12:37:43.675460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.709 qpair failed and we were unable to recover it. 
00:28:00.709 [2024-11-20 12:37:43.685408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.709 [2024-11-20 12:37:43.685464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.709 [2024-11-20 12:37:43.685478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.709 [2024-11-20 12:37:43.685489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.709 [2024-11-20 12:37:43.685496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.709 [2024-11-20 12:37:43.685511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.709 qpair failed and we were unable to recover it. 
00:28:00.709 [2024-11-20 12:37:43.695480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.709 [2024-11-20 12:37:43.695538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.709 [2024-11-20 12:37:43.695551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.709 [2024-11-20 12:37:43.695558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.709 [2024-11-20 12:37:43.695565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.709 [2024-11-20 12:37:43.695580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.709 qpair failed and we were unable to recover it. 
00:28:00.709 [2024-11-20 12:37:43.705471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.709 [2024-11-20 12:37:43.705518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.709 [2024-11-20 12:37:43.705532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.709 [2024-11-20 12:37:43.705539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.709 [2024-11-20 12:37:43.705545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.709 [2024-11-20 12:37:43.705560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.709 qpair failed and we were unable to recover it. 
00:28:00.709 [2024-11-20 12:37:43.715485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.709 [2024-11-20 12:37:43.715538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.709 [2024-11-20 12:37:43.715552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.709 [2024-11-20 12:37:43.715559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.709 [2024-11-20 12:37:43.715566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.709 [2024-11-20 12:37:43.715581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.709 qpair failed and we were unable to recover it. 
00:28:00.709 [2024-11-20 12:37:43.725451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.709 [2024-11-20 12:37:43.725510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.709 [2024-11-20 12:37:43.725524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.709 [2024-11-20 12:37:43.725531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.709 [2024-11-20 12:37:43.725538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.709 [2024-11-20 12:37:43.725554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.709 qpair failed and we were unable to recover it. 
00:28:00.709 [2024-11-20 12:37:43.735548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.709 [2024-11-20 12:37:43.735604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.709 [2024-11-20 12:37:43.735618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.709 [2024-11-20 12:37:43.735625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.709 [2024-11-20 12:37:43.735632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.709 [2024-11-20 12:37:43.735647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.709 qpair failed and we were unable to recover it. 
00:28:00.709 [2024-11-20 12:37:43.745554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.709 [2024-11-20 12:37:43.745607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.709 [2024-11-20 12:37:43.745622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.709 [2024-11-20 12:37:43.745629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.709 [2024-11-20 12:37:43.745635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.709 [2024-11-20 12:37:43.745650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.709 qpair failed and we were unable to recover it. 
00:28:00.709 [2024-11-20 12:37:43.755603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.709 [2024-11-20 12:37:43.755658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.709 [2024-11-20 12:37:43.755672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.709 [2024-11-20 12:37:43.755680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.709 [2024-11-20 12:37:43.755688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.709 [2024-11-20 12:37:43.755703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.709 qpair failed and we were unable to recover it. 
00:28:00.709 [2024-11-20 12:37:43.765615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.709 [2024-11-20 12:37:43.765668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.709 [2024-11-20 12:37:43.765682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.709 [2024-11-20 12:37:43.765689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.709 [2024-11-20 12:37:43.765696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.709 [2024-11-20 12:37:43.765711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.709 qpair failed and we were unable to recover it. 
00:28:00.709 [2024-11-20 12:37:43.775656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.709 [2024-11-20 12:37:43.775719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.709 [2024-11-20 12:37:43.775733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.709 [2024-11-20 12:37:43.775740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.709 [2024-11-20 12:37:43.775747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:00.709 [2024-11-20 12:37:43.775761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.709 qpair failed and we were unable to recover it. 
00:28:00.709 [2024-11-20 12:37:43.785679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.709 [2024-11-20 12:37:43.785734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.709 [2024-11-20 12:37:43.785748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.709 [2024-11-20 12:37:43.785756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.710 [2024-11-20 12:37:43.785764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.710 [2024-11-20 12:37:43.785779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.710 qpair failed and we were unable to recover it.
00:28:00.710 [2024-11-20 12:37:43.795714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.710 [2024-11-20 12:37:43.795767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.710 [2024-11-20 12:37:43.795781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.710 [2024-11-20 12:37:43.795788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.710 [2024-11-20 12:37:43.795796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.710 [2024-11-20 12:37:43.795810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.710 qpair failed and we were unable to recover it.
00:28:00.710 [2024-11-20 12:37:43.805742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.710 [2024-11-20 12:37:43.805799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.710 [2024-11-20 12:37:43.805813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.710 [2024-11-20 12:37:43.805821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.710 [2024-11-20 12:37:43.805827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.710 [2024-11-20 12:37:43.805842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.710 qpair failed and we were unable to recover it.
00:28:00.710 [2024-11-20 12:37:43.815773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.710 [2024-11-20 12:37:43.815828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.710 [2024-11-20 12:37:43.815842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.710 [2024-11-20 12:37:43.815853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.710 [2024-11-20 12:37:43.815859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.710 [2024-11-20 12:37:43.815875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.710 qpair failed and we were unable to recover it.
00:28:00.970 [2024-11-20 12:37:43.825808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.970 [2024-11-20 12:37:43.825860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.970 [2024-11-20 12:37:43.825874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.970 [2024-11-20 12:37:43.825881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.970 [2024-11-20 12:37:43.825888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.970 [2024-11-20 12:37:43.825903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.970 qpair failed and we were unable to recover it.
00:28:00.970 [2024-11-20 12:37:43.835837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.970 [2024-11-20 12:37:43.835889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.970 [2024-11-20 12:37:43.835903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.970 [2024-11-20 12:37:43.835910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.970 [2024-11-20 12:37:43.835916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.970 [2024-11-20 12:37:43.835932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.970 qpair failed and we were unable to recover it.
00:28:00.970 [2024-11-20 12:37:43.845862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.970 [2024-11-20 12:37:43.845918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.970 [2024-11-20 12:37:43.845932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.970 [2024-11-20 12:37:43.845940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.970 [2024-11-20 12:37:43.845950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.970 [2024-11-20 12:37:43.845975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.970 qpair failed and we were unable to recover it.
00:28:00.970 [2024-11-20 12:37:43.855910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.970 [2024-11-20 12:37:43.855966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.970 [2024-11-20 12:37:43.855981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.970 [2024-11-20 12:37:43.855988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.970 [2024-11-20 12:37:43.855995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.970 [2024-11-20 12:37:43.856013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.970 qpair failed and we were unable to recover it.
00:28:00.970 [2024-11-20 12:37:43.865920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.970 [2024-11-20 12:37:43.865973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.970 [2024-11-20 12:37:43.865987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.970 [2024-11-20 12:37:43.865994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.970 [2024-11-20 12:37:43.866001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.970 [2024-11-20 12:37:43.866016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.970 qpair failed and we were unable to recover it.
00:28:00.970 [2024-11-20 12:37:43.875953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.970 [2024-11-20 12:37:43.876007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.970 [2024-11-20 12:37:43.876022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.970 [2024-11-20 12:37:43.876029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.970 [2024-11-20 12:37:43.876036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.970 [2024-11-20 12:37:43.876052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.970 qpair failed and we were unable to recover it.
00:28:00.970 [2024-11-20 12:37:43.886000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.970 [2024-11-20 12:37:43.886103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.970 [2024-11-20 12:37:43.886117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.970 [2024-11-20 12:37:43.886124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.970 [2024-11-20 12:37:43.886131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.970 [2024-11-20 12:37:43.886146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.970 qpair failed and we were unable to recover it.
00:28:00.970 [2024-11-20 12:37:43.896062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.970 [2024-11-20 12:37:43.896117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.970 [2024-11-20 12:37:43.896132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.970 [2024-11-20 12:37:43.896139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.970 [2024-11-20 12:37:43.896145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.970 [2024-11-20 12:37:43.896160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.970 qpair failed and we were unable to recover it.
00:28:00.970 [2024-11-20 12:37:43.906058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.970 [2024-11-20 12:37:43.906121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.970 [2024-11-20 12:37:43.906136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.970 [2024-11-20 12:37:43.906143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.971 [2024-11-20 12:37:43.906150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.971 [2024-11-20 12:37:43.906165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.971 qpair failed and we were unable to recover it.
00:28:00.971 [2024-11-20 12:37:43.916034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.971 [2024-11-20 12:37:43.916088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.971 [2024-11-20 12:37:43.916102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.971 [2024-11-20 12:37:43.916109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.971 [2024-11-20 12:37:43.916116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.971 [2024-11-20 12:37:43.916131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.971 qpair failed and we were unable to recover it.
00:28:00.971 [2024-11-20 12:37:43.926136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.971 [2024-11-20 12:37:43.926211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.971 [2024-11-20 12:37:43.926226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.971 [2024-11-20 12:37:43.926234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.971 [2024-11-20 12:37:43.926240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.971 [2024-11-20 12:37:43.926255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.971 qpair failed and we were unable to recover it.
00:28:00.971 [2024-11-20 12:37:43.936082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.971 [2024-11-20 12:37:43.936136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.971 [2024-11-20 12:37:43.936150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.971 [2024-11-20 12:37:43.936157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.971 [2024-11-20 12:37:43.936163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.971 [2024-11-20 12:37:43.936178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.971 qpair failed and we were unable to recover it.
00:28:00.971 [2024-11-20 12:37:43.946097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.971 [2024-11-20 12:37:43.946151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.971 [2024-11-20 12:37:43.946169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.971 [2024-11-20 12:37:43.946176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.971 [2024-11-20 12:37:43.946184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.971 [2024-11-20 12:37:43.946199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.971 qpair failed and we were unable to recover it.
00:28:00.971 [2024-11-20 12:37:43.956185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.971 [2024-11-20 12:37:43.956240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.971 [2024-11-20 12:37:43.956254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.971 [2024-11-20 12:37:43.956261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.971 [2024-11-20 12:37:43.956268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.971 [2024-11-20 12:37:43.956283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.971 qpair failed and we were unable to recover it.
00:28:00.971 [2024-11-20 12:37:43.966216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.971 [2024-11-20 12:37:43.966275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.971 [2024-11-20 12:37:43.966289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.971 [2024-11-20 12:37:43.966296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.971 [2024-11-20 12:37:43.966303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.971 [2024-11-20 12:37:43.966317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.971 qpair failed and we were unable to recover it.
00:28:00.971 [2024-11-20 12:37:43.976244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.971 [2024-11-20 12:37:43.976298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.971 [2024-11-20 12:37:43.976312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.971 [2024-11-20 12:37:43.976319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.971 [2024-11-20 12:37:43.976326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.971 [2024-11-20 12:37:43.976340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.971 qpair failed and we were unable to recover it.
00:28:00.971 [2024-11-20 12:37:43.986207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.971 [2024-11-20 12:37:43.986260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.971 [2024-11-20 12:37:43.986274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.971 [2024-11-20 12:37:43.986281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.971 [2024-11-20 12:37:43.986292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.971 [2024-11-20 12:37:43.986307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.971 qpair failed and we were unable to recover it.
00:28:00.971 [2024-11-20 12:37:43.996294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.971 [2024-11-20 12:37:43.996372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.971 [2024-11-20 12:37:43.996386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.971 [2024-11-20 12:37:43.996393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.971 [2024-11-20 12:37:43.996399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.971 [2024-11-20 12:37:43.996413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.971 qpair failed and we were unable to recover it.
00:28:00.971 [2024-11-20 12:37:44.006272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.971 [2024-11-20 12:37:44.006328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.971 [2024-11-20 12:37:44.006341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.971 [2024-11-20 12:37:44.006349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.971 [2024-11-20 12:37:44.006356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.971 [2024-11-20 12:37:44.006370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.971 qpair failed and we were unable to recover it.
00:28:00.971 [2024-11-20 12:37:44.016368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.971 [2024-11-20 12:37:44.016444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.971 [2024-11-20 12:37:44.016459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.971 [2024-11-20 12:37:44.016466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.971 [2024-11-20 12:37:44.016472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.971 [2024-11-20 12:37:44.016488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.971 qpair failed and we were unable to recover it.
00:28:00.971 [2024-11-20 12:37:44.026393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.971 [2024-11-20 12:37:44.026448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.971 [2024-11-20 12:37:44.026462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.971 [2024-11-20 12:37:44.026469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.971 [2024-11-20 12:37:44.026476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.971 [2024-11-20 12:37:44.026492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.971 qpair failed and we were unable to recover it.
00:28:00.971 [2024-11-20 12:37:44.036400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.971 [2024-11-20 12:37:44.036498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.971 [2024-11-20 12:37:44.036512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.972 [2024-11-20 12:37:44.036519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.972 [2024-11-20 12:37:44.036526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.972 [2024-11-20 12:37:44.036541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.972 qpair failed and we were unable to recover it.
00:28:00.972 [2024-11-20 12:37:44.046449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.972 [2024-11-20 12:37:44.046528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.972 [2024-11-20 12:37:44.046542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.972 [2024-11-20 12:37:44.046549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.972 [2024-11-20 12:37:44.046555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.972 [2024-11-20 12:37:44.046570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.972 qpair failed and we were unable to recover it.
00:28:00.972 [2024-11-20 12:37:44.056458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.972 [2024-11-20 12:37:44.056539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.972 [2024-11-20 12:37:44.056553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.972 [2024-11-20 12:37:44.056560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.972 [2024-11-20 12:37:44.056566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.972 [2024-11-20 12:37:44.056581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.972 qpair failed and we were unable to recover it.
00:28:00.972 [2024-11-20 12:37:44.066424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.972 [2024-11-20 12:37:44.066482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.972 [2024-11-20 12:37:44.066496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.972 [2024-11-20 12:37:44.066503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.972 [2024-11-20 12:37:44.066510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.972 [2024-11-20 12:37:44.066525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.972 qpair failed and we were unable to recover it.
00:28:00.972 [2024-11-20 12:37:44.076526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:00.972 [2024-11-20 12:37:44.076580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:00.972 [2024-11-20 12:37:44.076597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:00.972 [2024-11-20 12:37:44.076604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:00.972 [2024-11-20 12:37:44.076610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:00.972 [2024-11-20 12:37:44.076625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:00.972 qpair failed and we were unable to recover it.
00:28:01.232 [2024-11-20 12:37:44.086518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.232 [2024-11-20 12:37:44.086572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.232 [2024-11-20 12:37:44.086586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.232 [2024-11-20 12:37:44.086593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.232 [2024-11-20 12:37:44.086599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:01.232 [2024-11-20 12:37:44.086614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.232 qpair failed and we were unable to recover it.
00:28:01.232 [2024-11-20 12:37:44.096541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.232 [2024-11-20 12:37:44.096642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.232 [2024-11-20 12:37:44.096658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.232 [2024-11-20 12:37:44.096665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.232 [2024-11-20 12:37:44.096671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:01.232 [2024-11-20 12:37:44.096687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.232 qpair failed and we were unable to recover it.
00:28:01.232 [2024-11-20 12:37:44.106667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.232 [2024-11-20 12:37:44.106718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.232 [2024-11-20 12:37:44.106732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.232 [2024-11-20 12:37:44.106739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.232 [2024-11-20 12:37:44.106745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:01.232 [2024-11-20 12:37:44.106761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.232 qpair failed and we were unable to recover it.
00:28:01.232 [2024-11-20 12:37:44.116594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.232 [2024-11-20 12:37:44.116650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.232 [2024-11-20 12:37:44.116664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.232 [2024-11-20 12:37:44.116671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.232 [2024-11-20 12:37:44.116684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:01.232 [2024-11-20 12:37:44.116700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.232 qpair failed and we were unable to recover it.
00:28:01.232 [2024-11-20 12:37:44.126719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.232 [2024-11-20 12:37:44.126820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.232 [2024-11-20 12:37:44.126834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.232 [2024-11-20 12:37:44.126840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.232 [2024-11-20 12:37:44.126847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:01.232 [2024-11-20 12:37:44.126862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.232 qpair failed and we were unable to recover it.
00:28:01.232 [2024-11-20 12:37:44.136734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.232 [2024-11-20 12:37:44.136798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.232 [2024-11-20 12:37:44.136814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.232 [2024-11-20 12:37:44.136822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.232 [2024-11-20 12:37:44.136829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.232 [2024-11-20 12:37:44.136844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.232 qpair failed and we were unable to recover it. 
00:28:01.232 [2024-11-20 12:37:44.146744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.232 [2024-11-20 12:37:44.146800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.232 [2024-11-20 12:37:44.146815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.232 [2024-11-20 12:37:44.146822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.232 [2024-11-20 12:37:44.146829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.232 [2024-11-20 12:37:44.146845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.232 qpair failed and we were unable to recover it. 
00:28:01.232 [2024-11-20 12:37:44.156757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.232 [2024-11-20 12:37:44.156827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.232 [2024-11-20 12:37:44.156842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.232 [2024-11-20 12:37:44.156849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.232 [2024-11-20 12:37:44.156856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.232 [2024-11-20 12:37:44.156873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.232 qpair failed and we were unable to recover it. 
00:28:01.233 [2024-11-20 12:37:44.166795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.233 [2024-11-20 12:37:44.166850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.233 [2024-11-20 12:37:44.166865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.233 [2024-11-20 12:37:44.166872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.233 [2024-11-20 12:37:44.166878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.233 [2024-11-20 12:37:44.166893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.233 qpair failed and we were unable to recover it. 
00:28:01.233 [2024-11-20 12:37:44.176750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.233 [2024-11-20 12:37:44.176806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.233 [2024-11-20 12:37:44.176819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.233 [2024-11-20 12:37:44.176826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.233 [2024-11-20 12:37:44.176833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.233 [2024-11-20 12:37:44.176848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.233 qpair failed and we were unable to recover it. 
00:28:01.233 [2024-11-20 12:37:44.186766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.233 [2024-11-20 12:37:44.186828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.233 [2024-11-20 12:37:44.186843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.233 [2024-11-20 12:37:44.186851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.233 [2024-11-20 12:37:44.186857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.233 [2024-11-20 12:37:44.186872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.233 qpair failed and we were unable to recover it. 
00:28:01.233 [2024-11-20 12:37:44.196798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.233 [2024-11-20 12:37:44.196862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.233 [2024-11-20 12:37:44.196876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.233 [2024-11-20 12:37:44.196884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.233 [2024-11-20 12:37:44.196890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.233 [2024-11-20 12:37:44.196905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.233 qpair failed and we were unable to recover it. 
00:28:01.233 [2024-11-20 12:37:44.206877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.233 [2024-11-20 12:37:44.206935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.233 [2024-11-20 12:37:44.206960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.233 [2024-11-20 12:37:44.206967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.233 [2024-11-20 12:37:44.206975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.233 [2024-11-20 12:37:44.206990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.233 qpair failed and we were unable to recover it. 
00:28:01.233 [2024-11-20 12:37:44.216924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.233 [2024-11-20 12:37:44.216986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.233 [2024-11-20 12:37:44.217000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.233 [2024-11-20 12:37:44.217007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.233 [2024-11-20 12:37:44.217014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.233 [2024-11-20 12:37:44.217029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.233 qpair failed and we were unable to recover it. 
00:28:01.233 [2024-11-20 12:37:44.226961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.233 [2024-11-20 12:37:44.227012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.233 [2024-11-20 12:37:44.227026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.233 [2024-11-20 12:37:44.227033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.233 [2024-11-20 12:37:44.227039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.233 [2024-11-20 12:37:44.227055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.233 qpair failed and we were unable to recover it. 
00:28:01.233 [2024-11-20 12:37:44.236979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.233 [2024-11-20 12:37:44.237040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.233 [2024-11-20 12:37:44.237054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.233 [2024-11-20 12:37:44.237062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.233 [2024-11-20 12:37:44.237068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.233 [2024-11-20 12:37:44.237083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.233 qpair failed and we were unable to recover it. 
00:28:01.233 [2024-11-20 12:37:44.247051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.233 [2024-11-20 12:37:44.247110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.233 [2024-11-20 12:37:44.247124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.233 [2024-11-20 12:37:44.247134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.233 [2024-11-20 12:37:44.247141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.233 [2024-11-20 12:37:44.247156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.233 qpair failed and we were unable to recover it. 
00:28:01.233 [2024-11-20 12:37:44.257042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.233 [2024-11-20 12:37:44.257099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.233 [2024-11-20 12:37:44.257114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.233 [2024-11-20 12:37:44.257121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.233 [2024-11-20 12:37:44.257129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.233 [2024-11-20 12:37:44.257144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.233 qpair failed and we were unable to recover it. 
00:28:01.233 [2024-11-20 12:37:44.267012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.233 [2024-11-20 12:37:44.267069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.233 [2024-11-20 12:37:44.267085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.233 [2024-11-20 12:37:44.267092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.233 [2024-11-20 12:37:44.267099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.233 [2024-11-20 12:37:44.267114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.233 qpair failed and we were unable to recover it. 
00:28:01.233 [2024-11-20 12:37:44.277112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.233 [2024-11-20 12:37:44.277169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.233 [2024-11-20 12:37:44.277183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.233 [2024-11-20 12:37:44.277191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.233 [2024-11-20 12:37:44.277197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.233 [2024-11-20 12:37:44.277212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.233 qpair failed and we were unable to recover it. 
00:28:01.233 [2024-11-20 12:37:44.287129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.233 [2024-11-20 12:37:44.287182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.233 [2024-11-20 12:37:44.287197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.233 [2024-11-20 12:37:44.287204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.233 [2024-11-20 12:37:44.287211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.234 [2024-11-20 12:37:44.287230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.234 qpair failed and we were unable to recover it. 
00:28:01.234 [2024-11-20 12:37:44.297152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.234 [2024-11-20 12:37:44.297210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.234 [2024-11-20 12:37:44.297224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.234 [2024-11-20 12:37:44.297231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.234 [2024-11-20 12:37:44.297238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.234 [2024-11-20 12:37:44.297253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.234 qpair failed and we were unable to recover it. 
00:28:01.234 [2024-11-20 12:37:44.307102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.234 [2024-11-20 12:37:44.307185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.234 [2024-11-20 12:37:44.307199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.234 [2024-11-20 12:37:44.307206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.234 [2024-11-20 12:37:44.307213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.234 [2024-11-20 12:37:44.307228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.234 qpair failed and we were unable to recover it. 
00:28:01.234 [2024-11-20 12:37:44.317154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.234 [2024-11-20 12:37:44.317206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.234 [2024-11-20 12:37:44.317219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.234 [2024-11-20 12:37:44.317226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.234 [2024-11-20 12:37:44.317233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.234 [2024-11-20 12:37:44.317248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.234 qpair failed and we were unable to recover it. 
00:28:01.234 [2024-11-20 12:37:44.327254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.234 [2024-11-20 12:37:44.327327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.234 [2024-11-20 12:37:44.327341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.234 [2024-11-20 12:37:44.327348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.234 [2024-11-20 12:37:44.327356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.234 [2024-11-20 12:37:44.327371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.234 qpair failed and we were unable to recover it. 
00:28:01.234 [2024-11-20 12:37:44.337277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.234 [2024-11-20 12:37:44.337341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.234 [2024-11-20 12:37:44.337355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.234 [2024-11-20 12:37:44.337362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.234 [2024-11-20 12:37:44.337368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.234 [2024-11-20 12:37:44.337383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.234 qpair failed and we were unable to recover it. 
00:28:01.494 [2024-11-20 12:37:44.347281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.494 [2024-11-20 12:37:44.347343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.494 [2024-11-20 12:37:44.347358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.494 [2024-11-20 12:37:44.347365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.494 [2024-11-20 12:37:44.347372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.494 [2024-11-20 12:37:44.347387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.494 qpair failed and we were unable to recover it. 
00:28:01.494 [2024-11-20 12:37:44.357327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.494 [2024-11-20 12:37:44.357383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.494 [2024-11-20 12:37:44.357396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.494 [2024-11-20 12:37:44.357403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.494 [2024-11-20 12:37:44.357410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.494 [2024-11-20 12:37:44.357426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.494 qpair failed and we were unable to recover it. 
00:28:01.494 [2024-11-20 12:37:44.367367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.494 [2024-11-20 12:37:44.367425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.494 [2024-11-20 12:37:44.367438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.494 [2024-11-20 12:37:44.367446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.494 [2024-11-20 12:37:44.367453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.494 [2024-11-20 12:37:44.367468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.494 qpair failed and we were unable to recover it. 
00:28:01.494 [2024-11-20 12:37:44.377319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.494 [2024-11-20 12:37:44.377376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.494 [2024-11-20 12:37:44.377389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.494 [2024-11-20 12:37:44.377400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.494 [2024-11-20 12:37:44.377407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.494 [2024-11-20 12:37:44.377422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.494 qpair failed and we were unable to recover it. 
00:28:01.494 [2024-11-20 12:37:44.387421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.494 [2024-11-20 12:37:44.387476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.494 [2024-11-20 12:37:44.387490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.494 [2024-11-20 12:37:44.387497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.494 [2024-11-20 12:37:44.387503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.495 [2024-11-20 12:37:44.387518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.495 qpair failed and we were unable to recover it. 
00:28:01.495 [2024-11-20 12:37:44.397444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.495 [2024-11-20 12:37:44.397498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.495 [2024-11-20 12:37:44.397512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.495 [2024-11-20 12:37:44.397518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.495 [2024-11-20 12:37:44.397525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.495 [2024-11-20 12:37:44.397540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.495 qpair failed and we were unable to recover it. 
00:28:01.495 [2024-11-20 12:37:44.407487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.495 [2024-11-20 12:37:44.407556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.495 [2024-11-20 12:37:44.407570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.495 [2024-11-20 12:37:44.407578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.495 [2024-11-20 12:37:44.407585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.495 [2024-11-20 12:37:44.407600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.495 qpair failed and we were unable to recover it. 
00:28:01.495 [2024-11-20 12:37:44.417498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.495 [2024-11-20 12:37:44.417555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.495 [2024-11-20 12:37:44.417569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.495 [2024-11-20 12:37:44.417577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.495 [2024-11-20 12:37:44.417583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.495 [2024-11-20 12:37:44.417601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.495 qpair failed and we were unable to recover it. 
00:28:01.495 [2024-11-20 12:37:44.427526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.495 [2024-11-20 12:37:44.427580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.495 [2024-11-20 12:37:44.427594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.495 [2024-11-20 12:37:44.427601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.495 [2024-11-20 12:37:44.427608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.495 [2024-11-20 12:37:44.427623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.495 qpair failed and we were unable to recover it. 
00:28:01.495 [2024-11-20 12:37:44.437556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.495 [2024-11-20 12:37:44.437618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.495 [2024-11-20 12:37:44.437631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.495 [2024-11-20 12:37:44.437639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.495 [2024-11-20 12:37:44.437645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.495 [2024-11-20 12:37:44.437660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.495 qpair failed and we were unable to recover it. 
00:28:01.495 [2024-11-20 12:37:44.447588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.495 [2024-11-20 12:37:44.447648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.495 [2024-11-20 12:37:44.447662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.495 [2024-11-20 12:37:44.447669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.495 [2024-11-20 12:37:44.447675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.495 [2024-11-20 12:37:44.447690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.495 qpair failed and we were unable to recover it. 
00:28:01.495 [2024-11-20 12:37:44.457622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.495 [2024-11-20 12:37:44.457687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.495 [2024-11-20 12:37:44.457701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.495 [2024-11-20 12:37:44.457708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.495 [2024-11-20 12:37:44.457715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.495 [2024-11-20 12:37:44.457730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.495 qpair failed and we were unable to recover it. 
00:28:01.495 [2024-11-20 12:37:44.467662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.495 [2024-11-20 12:37:44.467723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.495 [2024-11-20 12:37:44.467737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.495 [2024-11-20 12:37:44.467745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.495 [2024-11-20 12:37:44.467751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.495 [2024-11-20 12:37:44.467766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.495 qpair failed and we were unable to recover it. 
00:28:01.495 [2024-11-20 12:37:44.477713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.495 [2024-11-20 12:37:44.477767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.495 [2024-11-20 12:37:44.477781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.495 [2024-11-20 12:37:44.477789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.495 [2024-11-20 12:37:44.477795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.495 [2024-11-20 12:37:44.477811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.495 qpair failed and we were unable to recover it. 
00:28:01.495 [2024-11-20 12:37:44.487709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.495 [2024-11-20 12:37:44.487767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.495 [2024-11-20 12:37:44.487781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.495 [2024-11-20 12:37:44.487788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.495 [2024-11-20 12:37:44.487795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.495 [2024-11-20 12:37:44.487810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.495 qpair failed and we were unable to recover it. 
00:28:01.495 [2024-11-20 12:37:44.497668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.495 [2024-11-20 12:37:44.497731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.495 [2024-11-20 12:37:44.497744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.495 [2024-11-20 12:37:44.497752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.495 [2024-11-20 12:37:44.497758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.495 [2024-11-20 12:37:44.497773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.495 qpair failed and we were unable to recover it. 
00:28:01.495 [2024-11-20 12:37:44.507816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.496 [2024-11-20 12:37:44.507870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.496 [2024-11-20 12:37:44.507888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.496 [2024-11-20 12:37:44.507895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.496 [2024-11-20 12:37:44.507902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.496 [2024-11-20 12:37:44.507918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.496 qpair failed and we were unable to recover it. 
00:28:01.496 [2024-11-20 12:37:44.517782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.496 [2024-11-20 12:37:44.517835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.496 [2024-11-20 12:37:44.517849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.496 [2024-11-20 12:37:44.517856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.496 [2024-11-20 12:37:44.517863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.496 [2024-11-20 12:37:44.517877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.496 qpair failed and we were unable to recover it. 
00:28:01.496 [2024-11-20 12:37:44.527825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.496 [2024-11-20 12:37:44.527882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.496 [2024-11-20 12:37:44.527895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.496 [2024-11-20 12:37:44.527902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.496 [2024-11-20 12:37:44.527909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.496 [2024-11-20 12:37:44.527924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.496 qpair failed and we were unable to recover it. 
00:28:01.496 [2024-11-20 12:37:44.537843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.496 [2024-11-20 12:37:44.537896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.496 [2024-11-20 12:37:44.537910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.496 [2024-11-20 12:37:44.537917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.496 [2024-11-20 12:37:44.537924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.496 [2024-11-20 12:37:44.537939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.496 qpair failed and we were unable to recover it. 
00:28:01.496 [2024-11-20 12:37:44.547876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.496 [2024-11-20 12:37:44.547931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.496 [2024-11-20 12:37:44.547945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.496 [2024-11-20 12:37:44.547956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.496 [2024-11-20 12:37:44.547966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.496 [2024-11-20 12:37:44.547982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.496 qpair failed and we were unable to recover it. 
00:28:01.496 [2024-11-20 12:37:44.557902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.496 [2024-11-20 12:37:44.557959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.496 [2024-11-20 12:37:44.557975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.496 [2024-11-20 12:37:44.557983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.496 [2024-11-20 12:37:44.557990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.496 [2024-11-20 12:37:44.558008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.496 qpair failed and we were unable to recover it. 
00:28:01.496 [2024-11-20 12:37:44.567941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.496 [2024-11-20 12:37:44.568015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.496 [2024-11-20 12:37:44.568029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.496 [2024-11-20 12:37:44.568037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.496 [2024-11-20 12:37:44.568044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.496 [2024-11-20 12:37:44.568059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.496 qpair failed and we were unable to recover it. 
00:28:01.496 [2024-11-20 12:37:44.577966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.496 [2024-11-20 12:37:44.578023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.496 [2024-11-20 12:37:44.578036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.496 [2024-11-20 12:37:44.578043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.496 [2024-11-20 12:37:44.578050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.496 [2024-11-20 12:37:44.578066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.496 qpair failed and we were unable to recover it. 
00:28:01.496 [2024-11-20 12:37:44.588005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.496 [2024-11-20 12:37:44.588076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.496 [2024-11-20 12:37:44.588091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.496 [2024-11-20 12:37:44.588098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.496 [2024-11-20 12:37:44.588104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.496 [2024-11-20 12:37:44.588119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.496 qpair failed and we were unable to recover it. 
00:28:01.496 [2024-11-20 12:37:44.598009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.496 [2024-11-20 12:37:44.598066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.496 [2024-11-20 12:37:44.598081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.496 [2024-11-20 12:37:44.598088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.496 [2024-11-20 12:37:44.598094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.496 [2024-11-20 12:37:44.598110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.496 qpair failed and we were unable to recover it. 
00:28:01.496 [2024-11-20 12:37:44.607978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.496 [2024-11-20 12:37:44.608036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.496 [2024-11-20 12:37:44.608050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.496 [2024-11-20 12:37:44.608057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.496 [2024-11-20 12:37:44.608064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.496 [2024-11-20 12:37:44.608079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.496 qpair failed and we were unable to recover it. 
00:28:01.756 [2024-11-20 12:37:44.618125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.756 [2024-11-20 12:37:44.618186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.756 [2024-11-20 12:37:44.618201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.756 [2024-11-20 12:37:44.618208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.756 [2024-11-20 12:37:44.618215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.756 [2024-11-20 12:37:44.618231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.756 qpair failed and we were unable to recover it. 
00:28:01.756 [2024-11-20 12:37:44.628103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.757 [2024-11-20 12:37:44.628166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.757 [2024-11-20 12:37:44.628180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.757 [2024-11-20 12:37:44.628188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.757 [2024-11-20 12:37:44.628194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.757 [2024-11-20 12:37:44.628209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.757 qpair failed and we were unable to recover it. 
00:28:01.757 [2024-11-20 12:37:44.638124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.757 [2024-11-20 12:37:44.638176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.757 [2024-11-20 12:37:44.638193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.757 [2024-11-20 12:37:44.638201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.757 [2024-11-20 12:37:44.638208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.757 [2024-11-20 12:37:44.638222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.757 qpair failed and we were unable to recover it. 
00:28:01.757 [2024-11-20 12:37:44.648159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.757 [2024-11-20 12:37:44.648213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.757 [2024-11-20 12:37:44.648227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.757 [2024-11-20 12:37:44.648235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.757 [2024-11-20 12:37:44.648242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.757 [2024-11-20 12:37:44.648257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.757 qpair failed and we were unable to recover it. 
00:28:01.757 [2024-11-20 12:37:44.658196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.757 [2024-11-20 12:37:44.658264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.757 [2024-11-20 12:37:44.658278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.757 [2024-11-20 12:37:44.658285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.757 [2024-11-20 12:37:44.658291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.757 [2024-11-20 12:37:44.658306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.757 qpair failed and we were unable to recover it. 
00:28:01.757 [2024-11-20 12:37:44.668229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.757 [2024-11-20 12:37:44.668293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.757 [2024-11-20 12:37:44.668306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.757 [2024-11-20 12:37:44.668313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.757 [2024-11-20 12:37:44.668320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.757 [2024-11-20 12:37:44.668335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.757 qpair failed and we were unable to recover it. 
00:28:01.757 [2024-11-20 12:37:44.678407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.757 [2024-11-20 12:37:44.678460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.757 [2024-11-20 12:37:44.678474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.757 [2024-11-20 12:37:44.678480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.757 [2024-11-20 12:37:44.678490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.757 [2024-11-20 12:37:44.678505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.757 qpair failed and we were unable to recover it. 
00:28:01.757 [2024-11-20 12:37:44.688283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.757 [2024-11-20 12:37:44.688338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.757 [2024-11-20 12:37:44.688351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.757 [2024-11-20 12:37:44.688358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.757 [2024-11-20 12:37:44.688365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.757 [2024-11-20 12:37:44.688380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.757 qpair failed and we were unable to recover it. 
00:28:01.757 [2024-11-20 12:37:44.698236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.757 [2024-11-20 12:37:44.698302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.757 [2024-11-20 12:37:44.698317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.757 [2024-11-20 12:37:44.698325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.757 [2024-11-20 12:37:44.698331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.757 [2024-11-20 12:37:44.698346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.757 qpair failed and we were unable to recover it. 
00:28:01.757 [2024-11-20 12:37:44.708334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.757 [2024-11-20 12:37:44.708387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.757 [2024-11-20 12:37:44.708401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.757 [2024-11-20 12:37:44.708408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.757 [2024-11-20 12:37:44.708415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.757 [2024-11-20 12:37:44.708430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.757 qpair failed and we were unable to recover it. 
00:28:01.757 [2024-11-20 12:37:44.718381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.757 [2024-11-20 12:37:44.718430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.757 [2024-11-20 12:37:44.718445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.757 [2024-11-20 12:37:44.718452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.757 [2024-11-20 12:37:44.718459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.757 [2024-11-20 12:37:44.718474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.757 qpair failed and we were unable to recover it. 
00:28:01.757 [2024-11-20 12:37:44.728336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:01.757 [2024-11-20 12:37:44.728392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:01.757 [2024-11-20 12:37:44.728407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:01.757 [2024-11-20 12:37:44.728416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:01.757 [2024-11-20 12:37:44.728423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90 00:28:01.757 [2024-11-20 12:37:44.728439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.757 qpair failed and we were unable to recover it. 
00:28:01.757 [2024-11-20 12:37:44.738416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.757 [2024-11-20 12:37:44.738496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.757 [2024-11-20 12:37:44.738511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.757 [2024-11-20 12:37:44.738519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.757 [2024-11-20 12:37:44.738527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:01.757 [2024-11-20 12:37:44.738543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.757 qpair failed and we were unable to recover it.
00:28:01.757 [2024-11-20 12:37:44.748494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.757 [2024-11-20 12:37:44.748547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.757 [2024-11-20 12:37:44.748561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.757 [2024-11-20 12:37:44.748569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.757 [2024-11-20 12:37:44.748576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:01.757 [2024-11-20 12:37:44.748590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.757 qpair failed and we were unable to recover it.
00:28:01.758 [2024-11-20 12:37:44.758508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.758 [2024-11-20 12:37:44.758571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.758 [2024-11-20 12:37:44.758587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.758 [2024-11-20 12:37:44.758594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.758 [2024-11-20 12:37:44.758601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:01.758 [2024-11-20 12:37:44.758617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.758 qpair failed and we were unable to recover it.
00:28:01.758 [2024-11-20 12:37:44.768505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.758 [2024-11-20 12:37:44.768560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.758 [2024-11-20 12:37:44.768577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.758 [2024-11-20 12:37:44.768584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.758 [2024-11-20 12:37:44.768591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff898000b90
00:28:01.758 [2024-11-20 12:37:44.768606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:01.758 qpair failed and we were unable to recover it.
00:28:01.758 [2024-11-20 12:37:44.778551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.758 [2024-11-20 12:37:44.778651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.758 [2024-11-20 12:37:44.778709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.758 [2024-11-20 12:37:44.778735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.758 [2024-11-20 12:37:44.778757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff894000b90
00:28:01.758 [2024-11-20 12:37:44.778809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:01.758 qpair failed and we were unable to recover it.
00:28:01.758 [2024-11-20 12:37:44.788594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.758 [2024-11-20 12:37:44.788681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.758 [2024-11-20 12:37:44.788710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.758 [2024-11-20 12:37:44.788725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.758 [2024-11-20 12:37:44.788740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff894000b90
00:28:01.758 [2024-11-20 12:37:44.788772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:01.758 qpair failed and we were unable to recover it.
00:28:01.758 [2024-11-20 12:37:44.798630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.758 [2024-11-20 12:37:44.798727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.758 [2024-11-20 12:37:44.798786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.758 [2024-11-20 12:37:44.798812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.758 [2024-11-20 12:37:44.798835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d57ba0
00:28:01.758 [2024-11-20 12:37:44.798887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.758 qpair failed and we were unable to recover it.
00:28:01.758 [2024-11-20 12:37:44.808645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.758 [2024-11-20 12:37:44.808726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.758 [2024-11-20 12:37:44.808757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.758 [2024-11-20 12:37:44.808782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.758 [2024-11-20 12:37:44.808796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d57ba0
00:28:01.758 [2024-11-20 12:37:44.808828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:01.758 qpair failed and we were unable to recover it.
00:28:01.758 [2024-11-20 12:37:44.818708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.758 [2024-11-20 12:37:44.818845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.758 [2024-11-20 12:37:44.818901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.758 [2024-11-20 12:37:44.818927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.758 [2024-11-20 12:37:44.818961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8a0000b90
00:28:01.758 [2024-11-20 12:37:44.819015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:01.758 qpair failed and we were unable to recover it.
00:28:01.758 [2024-11-20 12:37:44.828665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:01.758 [2024-11-20 12:37:44.828739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:01.758 [2024-11-20 12:37:44.828767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:01.758 [2024-11-20 12:37:44.828782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:01.758 [2024-11-20 12:37:44.828795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff8a0000b90
00:28:01.758 [2024-11-20 12:37:44.828826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:01.758 qpair failed and we were unable to recover it.
00:28:01.758 [2024-11-20 12:37:44.828942] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:28:01.758 A controller has encountered a failure and is being reset.
00:28:01.758 Controller properly reset.
00:28:01.758 Initializing NVMe Controllers
00:28:01.758 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:01.758 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:01.758 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:28:01.758 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:28:01.758 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:28:01.758 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:28:01.758 Initialization complete. Launching workers.
00:28:01.758 Starting thread on core 1 00:28:01.758 Starting thread on core 2 00:28:01.758 Starting thread on core 3 00:28:01.758 Starting thread on core 0 00:28:01.758 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:01.758 00:28:01.758 real 0m10.749s 00:28:01.758 user 0m19.352s 00:28:01.758 sys 0m4.679s 00:28:01.758 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:01.758 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.758 ************************************ 00:28:01.758 END TEST nvmf_target_disconnect_tc2 00:28:01.758 ************************************ 00:28:02.017 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:02.017 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:02.017 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:02.017 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:02.017 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:02.017 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:02.017 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:02.017 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:02.017 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:02.017 rmmod nvme_tcp 00:28:02.017 rmmod nvme_fabrics 00:28:02.017 rmmod nvme_keyring 00:28:02.018 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:28:02.018 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:02.018 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:02.018 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 602925 ']' 00:28:02.018 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 602925 00:28:02.018 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 602925 ']' 00:28:02.018 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 602925 00:28:02.018 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:28:02.018 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:02.018 12:37:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 602925 00:28:02.018 12:37:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:28:02.018 12:37:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:28:02.018 12:37:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 602925' 00:28:02.018 killing process with pid 602925 00:28:02.018 12:37:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 602925 00:28:02.018 12:37:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 602925 00:28:02.277 12:37:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:02.277 12:37:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:02.277 12:37:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:02.277 12:37:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:02.277 12:37:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:02.277 12:37:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:02.277 12:37:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:02.277 12:37:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:02.277 12:37:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:02.277 12:37:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.277 12:37:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:02.277 12:37:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.184 12:37:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:04.184 00:28:04.184 real 0m19.522s 00:28:04.184 user 0m46.808s 00:28:04.184 sys 0m9.570s 00:28:04.184 12:37:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:04.184 12:37:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:04.184 ************************************ 00:28:04.184 END TEST nvmf_target_disconnect 00:28:04.184 ************************************ 00:28:04.184 12:37:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:04.184 00:28:04.184 real 5m51.763s 00:28:04.184 user 10m32.194s 00:28:04.184 sys 1m58.675s 00:28:04.184 12:37:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:04.443 12:37:47 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.443 ************************************ 00:28:04.443 END TEST nvmf_host 00:28:04.443 ************************************ 00:28:04.444 12:37:47 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:04.444 12:37:47 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:04.444 12:37:47 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:04.444 12:37:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:04.444 12:37:47 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.444 12:37:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:04.444 ************************************ 00:28:04.444 START TEST nvmf_target_core_interrupt_mode 00:28:04.444 ************************************ 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:04.444 * Looking for test storage... 
00:28:04.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:04.444 12:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:04.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.444 --rc 
genhtml_branch_coverage=1 00:28:04.444 --rc genhtml_function_coverage=1 00:28:04.444 --rc genhtml_legend=1 00:28:04.444 --rc geninfo_all_blocks=1 00:28:04.444 --rc geninfo_unexecuted_blocks=1 00:28:04.444 00:28:04.444 ' 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:04.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.444 --rc genhtml_branch_coverage=1 00:28:04.444 --rc genhtml_function_coverage=1 00:28:04.444 --rc genhtml_legend=1 00:28:04.444 --rc geninfo_all_blocks=1 00:28:04.444 --rc geninfo_unexecuted_blocks=1 00:28:04.444 00:28:04.444 ' 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:04.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.444 --rc genhtml_branch_coverage=1 00:28:04.444 --rc genhtml_function_coverage=1 00:28:04.444 --rc genhtml_legend=1 00:28:04.444 --rc geninfo_all_blocks=1 00:28:04.444 --rc geninfo_unexecuted_blocks=1 00:28:04.444 00:28:04.444 ' 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:04.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.444 --rc genhtml_branch_coverage=1 00:28:04.444 --rc genhtml_function_coverage=1 00:28:04.444 --rc genhtml_legend=1 00:28:04.444 --rc geninfo_all_blocks=1 00:28:04.444 --rc geninfo_unexecuted_blocks=1 00:28:04.444 00:28:04.444 ' 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.444 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.705 
12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.705 12:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:04.705 
12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:04.705 ************************************ 00:28:04.705 START TEST nvmf_abort 00:28:04.705 ************************************ 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:04.705 * Looking for test storage... 
00:28:04.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:04.705 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:04.706 12:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:04.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.706 --rc genhtml_branch_coverage=1 00:28:04.706 --rc genhtml_function_coverage=1 00:28:04.706 --rc genhtml_legend=1 00:28:04.706 --rc geninfo_all_blocks=1 00:28:04.706 --rc geninfo_unexecuted_blocks=1 00:28:04.706 00:28:04.706 ' 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:04.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.706 --rc genhtml_branch_coverage=1 00:28:04.706 --rc genhtml_function_coverage=1 00:28:04.706 --rc genhtml_legend=1 00:28:04.706 --rc geninfo_all_blocks=1 00:28:04.706 --rc geninfo_unexecuted_blocks=1 00:28:04.706 00:28:04.706 ' 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:04.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.706 --rc genhtml_branch_coverage=1 00:28:04.706 --rc genhtml_function_coverage=1 00:28:04.706 --rc genhtml_legend=1 00:28:04.706 --rc geninfo_all_blocks=1 00:28:04.706 --rc geninfo_unexecuted_blocks=1 00:28:04.706 00:28:04.706 ' 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:04.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.706 --rc genhtml_branch_coverage=1 00:28:04.706 --rc genhtml_function_coverage=1 00:28:04.706 --rc genhtml_legend=1 00:28:04.706 --rc geninfo_all_blocks=1 00:28:04.706 --rc geninfo_unexecuted_blocks=1 00:28:04.706 00:28:04.706 ' 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.706 12:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:04.706 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:04.966 12:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:04.966 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:11.538 12:37:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:11.538 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:11.538 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.538 
12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:11.538 Found net devices under 0000:86:00.0: cvl_0_0 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.538 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:11.539 Found net devices under 0000:86:00.1: cvl_0_1 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.539 12:37:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:11.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:11.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:28:11.539 00:28:11.539 --- 10.0.0.2 ping statistics --- 00:28:11.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.539 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:11.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:11.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:28:11.539 00:28:11.539 --- 10.0.0.1 ping statistics --- 00:28:11.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.539 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=607499 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 607499 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 607499 ']' 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:11.539 [2024-11-20 12:37:53.731564] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:11.539 [2024-11-20 12:37:53.732459] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:28:11.539 [2024-11-20 12:37:53.732490] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.539 [2024-11-20 12:37:53.810840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:11.539 [2024-11-20 12:37:53.852742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:11.539 [2024-11-20 12:37:53.852780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:11.539 [2024-11-20 12:37:53.852788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:11.539 [2024-11-20 12:37:53.852794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:11.539 [2024-11-20 12:37:53.852799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:11.539 [2024-11-20 12:37:53.854129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:11.539 [2024-11-20 12:37:53.854219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:11.539 [2024-11-20 12:37:53.854220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.539 [2024-11-20 12:37:53.922333] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:11.539 [2024-11-20 12:37:53.922345] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:11.539 [2024-11-20 12:37:53.922912] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:28:11.539 [2024-11-20 12:37:53.923150] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:11.539 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:11.540 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:11.540 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.540 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:11.540 [2024-11-20 12:37:53.991037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:28:11.540 Malloc0
00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:28:11.540 Delay0
00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
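The xtrace output above records the whole setup sequence abort.sh drives over SPDK's JSON-RPC: transport, malloc bdev, delay bdev, subsystem, namespace, listener. A dry-run sketch of that sequence (the `scripts/rpc.py` path is an assumption for illustration; commands are echoed rather than executed, since they need a live nvmf_tgt):

```shell
# Dry-run sketch of the RPC setup recorded in the log above.
# Illustrative wrapper only -- the real test issues these via rpc_cmd
# against a running nvmf_tgt; RPC path below is assumed.
RPC="scripts/rpc.py"
setup_cmds=(
    "nvmf_create_transport -t tcp -o -u 8192 -a 256"                                      # TCP transport
    "bdev_malloc_create 64 4096 -b Malloc0"                                               # RAM-backed bdev
    "bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000"  # add latency
    "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0"                        # subsystem
    "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0"                             # namespace
    "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420"   # listener
)
for cmd in "${setup_cmds[@]}"; do
    echo "$RPC $cmd"    # swap echo for direct execution against a live target
done
```

The Delay0 bdev inserts artificial latency on every I/O, which keeps commands in flight long enough for the abort example to have something to cancel.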
00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:11.540 [2024-11-20 12:37:54.091032] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.540 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:11.540 [2024-11-20 12:37:54.260130] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:13.447 Initializing NVMe Controllers 00:28:13.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:13.447 controller IO queue size 128 less than required 00:28:13.447 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:13.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:13.447 Initialization complete. Launching workers. 
00:28:13.447 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36930 00:28:13.447 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36987, failed to submit 66 00:28:13.447 success 36930, unsuccessful 57, failed 0 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:13.447 rmmod nvme_tcp 00:28:13.447 rmmod nvme_fabrics 00:28:13.447 rmmod nvme_keyring 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:13.447 12:37:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 607499 ']' 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 607499 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 607499 ']' 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 607499 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 607499 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 607499' 00:28:13.447 killing process with pid 607499 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 607499 00:28:13.447 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 607499 00:28:13.705 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:13.705 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:13.705 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:13.705 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:13.705 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:13.705 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:13.705 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:13.705 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:13.705 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:13.705 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.705 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.705 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.611 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:15.871 00:28:15.871 real 0m11.109s 00:28:15.871 user 0m10.501s 00:28:15.871 sys 0m5.642s 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:15.871 ************************************ 00:28:15.871 END TEST nvmf_abort 00:28:15.871 ************************************ 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:15.871 ************************************ 00:28:15.871 START TEST nvmf_ns_hotplug_stress 00:28:15.871 ************************************ 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:15.871 * Looking for test storage... 00:28:15.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:15.871 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:15.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.872 --rc genhtml_branch_coverage=1 00:28:15.872 --rc genhtml_function_coverage=1 00:28:15.872 --rc genhtml_legend=1 00:28:15.872 --rc geninfo_all_blocks=1 00:28:15.872 --rc geninfo_unexecuted_blocks=1 00:28:15.872 00:28:15.872 ' 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:15.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.872 --rc genhtml_branch_coverage=1 00:28:15.872 --rc genhtml_function_coverage=1 00:28:15.872 --rc genhtml_legend=1 00:28:15.872 --rc geninfo_all_blocks=1 00:28:15.872 --rc geninfo_unexecuted_blocks=1 00:28:15.872 00:28:15.872 ' 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:15.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.872 --rc genhtml_branch_coverage=1 00:28:15.872 --rc genhtml_function_coverage=1 00:28:15.872 --rc genhtml_legend=1 00:28:15.872 --rc geninfo_all_blocks=1 00:28:15.872 --rc geninfo_unexecuted_blocks=1 00:28:15.872 00:28:15.872 ' 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:15.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.872 --rc genhtml_branch_coverage=1 00:28:15.872 --rc genhtml_function_coverage=1 00:28:15.872 --rc genhtml_legend=1 00:28:15.872 --rc geninfo_all_blocks=1 00:28:15.872 --rc geninfo_unexecuted_blocks=1 00:28:15.872 00:28:15.872 ' 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.872 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.132 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.132 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:16.132 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:16.132 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.132 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.132 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.132 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:16.132 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.132 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:16.132 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.132 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.132 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.132 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.132 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.132 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.132 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:16.133 12:37:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:16.133 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:16.133 12:37:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:22.705 12:38:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.705 
12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:22.705 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.705 12:38:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:22.705 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.705 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.706 12:38:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:22.706 Found net devices under 0000:86:00.0: cvl_0_0 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:22.706 Found net devices under 0000:86:00.1: cvl_0_1 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
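The "Found net devices under 0000:86:00.x" lines above come from globbing the `net/` directory beneath each PCI device's sysfs node (common.sh@411) and stripping the path prefix (common.sh@427). A self-contained sketch, with a temp directory standing in for `/sys` so it runs without the hardware:

```shell
#!/usr/bin/env bash
# Sketch of nvmf/common.sh@411/427: glob net/ under a PCI device's sysfs
# node, then strip the directory prefix to get bare interface names.
# A temp tree stands in for /sys/bus/pci so this runs anywhere.
sysroot=$(mktemp -d)
mkdir -p "$sysroot/devices/0000:86:00.0/net/cvl_0_0"

pci=0000:86:00.0
pci_net_devs=("$sysroot/devices/$pci/net/"*)   # full sysfs paths
pci_net_devs=("${pci_net_devs[@]##*/}")        # keep only the leaf names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sysroot"
```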
00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:22.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:28:22.706 00:28:22.706 --- 10.0.0.2 ping statistics --- 00:28:22.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.706 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:22.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:28:22.706 00:28:22.706 --- 10.0.0.1 ping statistics --- 00:28:22.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.706 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.706 12:38:04 
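The nvmf_tcp_init sequence above (common.sh@265-291) moves one NIC port into a private network namespace so target and initiator traffic actually crosses the wire, then verifies both directions with ping. A dry-run sketch of that plumbing; `run` echoes instead of executing, since the real commands need root and the physical `cvl_0_*` interfaces:

```shell
#!/usr/bin/env bash
# Dry-run sketch of nvmf_tcp_init above: one port stays in the default
# netns (initiator, 10.0.0.1), the other moves into cvl_0_0_ns_spdk
# (target, 10.0.0.2). "run" echoes rather than executes.
run() { echo "+ $*"; }

target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"
run ip addr add 10.0.0.1/24 dev "$initiator_if"
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2    # initiator -> target sanity check, as in the log
```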
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=611493 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 611493 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 611493 ']' 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:22.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.706 12:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:22.706 [2024-11-20 12:38:04.930096] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:22.706 [2024-11-20 12:38:04.931103] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:28:22.706 [2024-11-20 12:38:04.931143] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.706 [2024-11-20 12:38:05.010417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:22.706 [2024-11-20 12:38:05.052979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.706 [2024-11-20 12:38:05.053016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.706 [2024-11-20 12:38:05.053023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.706 [2024-11-20 12:38:05.053029] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.706 [2024-11-20 12:38:05.053034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:22.706 [2024-11-20 12:38:05.054462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:22.706 [2024-11-20 12:38:05.054571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.707 [2024-11-20 12:38:05.054571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:22.707 [2024-11-20 12:38:05.122027] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:22.707 [2024-11-20 12:38:05.122900] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:22.707 [2024-11-20 12:38:05.123081] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:22.707 [2024-11-20 12:38:05.123222] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:22.707 12:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.707 12:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:28:22.707 12:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:22.707 12:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:22.707 12:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:22.707 12:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.707 12:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:28:22.707 12:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:22.707 [2024-11-20 12:38:05.359419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.707 12:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:22.707 12:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:22.707 [2024-11-20 12:38:05.767781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.707 12:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:22.966 12:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:23.225 Malloc0 00:28:23.225 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:23.483 Delay0 00:28:23.483 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:23.483 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:23.742 NULL1 00:28:23.742 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:28:24.001 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:24.001 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=611759 00:28:24.001 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:24.001 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:25.379 Read completed with error (sct=0, sc=11) 00:28:25.379 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:25.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:25.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:25.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
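The target bring-up logged above (ns_hotplug_stress.sh@27-36) is a fixed RPC sequence: transport, subsystem, listeners, then the Malloc0 → Delay0 → namespace chain plus the NULL1 bdev. A dry-run sketch where `rpc()` echoes instead of invoking the real `scripts/rpc.py`:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the ns_hotplug_stress.sh@27-36 bring-up above.
# rpc() echoes instead of calling scripts/rpc.py against a live target.
rpc() { echo "rpc.py $*"; }
nqn=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0       # 32 MiB bdev, 512 B blocks
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 \
    -w 1000000 -n 1000000                      # wrap Malloc0 with latency
rpc nvmf_subsystem_add_ns "$nqn" Delay0
rpc bdev_null_create NULL1 1000 512            # NULL1: 1000 MiB, resized later
rpc nvmf_subsystem_add_ns "$nqn" NULL1
```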
00:28:25.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:25.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:25.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:25.379 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:25.379 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:25.638 true 00:28:25.638 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:25.638 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:26.575 12:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:26.575 12:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:26.575 12:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:26.834 true 00:28:26.834 12:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:26.834 12:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:28:27.093 12:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:27.093 12:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:27.093 12:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:27.352 true 00:28:27.352 12:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:27.352 12:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:28.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:28.730 12:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:28.730 12:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:28.730 12:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:28.730 true 00:28:28.730 12:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:28.730 12:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:28.989 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:29.248 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:29.248 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:29.507 true 00:28:29.507 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:29.507 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:30.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.705 12:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:30.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.705 12:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:30.705 12:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:30.964 true 00:28:30.964 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:30.964 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:31.901 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:31.901 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:31.901 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:32.160 true 00:28:32.160 12:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:32.160 12:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.419 12:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:32.678 12:38:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:32.678 12:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:32.678 true 00:28:32.678 12:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:32.678 12:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:34.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:34.057 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:34.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:34.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:34.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:34.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:34.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:34.057 12:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:34.057 12:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:34.316 true 00:28:34.316 12:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
611759 00:28:34.316 12:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:35.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:35.253 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:35.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:35.253 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:35.253 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:35.513 true 00:28:35.513 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:35.513 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:35.772 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:35.772 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:35.772 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:36.031 true 00:28:36.031 12:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:36.032 12:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:37.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:37.408 12:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:37.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:37.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:37.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:37.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:37.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:37.408 12:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:37.408 12:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:37.667 true 00:28:37.667 12:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:37.667 12:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
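Each cycle in the log repeats the same sequence from ns_hotplug_stress.sh@44-50 while `spdk_nvme_perf` hammers the subsystem: remove the namespace, re-add Delay0, grow NULL1 by one, and resize. A dry-run sketch of that loop; `rpc()` echoes rather than calling the real `scripts/rpc.py`, and a fixed iteration count stands in for the `kill -0 $PERF_PID` liveness check:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the ns_hotplug_stress loop above: while perf I/O runs,
# repeatedly hot-remove the namespace, re-add Delay0, and grow NULL1.
# rpc() echoes instead of invoking scripts/rpc.py; three iterations stand
# in for "while kill -0 $PERF_PID" (perf alive for the 30 s run).
rpc() { echo "rpc.py $*"; }
nqn=nqn.2016-06.io.spdk:cnode1
null_size=1000                           # matches null_size=1000 in the log

for _ in 1 2 3; do                       # real loop: while kill -0 "$PERF_PID"
  rpc nvmf_subsystem_remove_ns "$nqn" 1
  rpc nvmf_subsystem_add_ns "$nqn" Delay0
  null_size=$((null_size + 1))           # 1001, 1002, ... as in the log
  rpc bdev_null_resize NULL1 "$null_size"
done
echo "final null_size=$null_size"        # final null_size=1003
```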
nqn.2016-06.io.spdk:cnode1 1 00:28:38.604 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:38.604 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:38.604 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:38.604 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:38.862 true 00:28:38.862 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:38.862 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:39.121 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:39.380 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:39.380 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:39.380 true 00:28:39.380 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:39.380 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:40.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.757 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:40.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.757 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:40.757 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:41.016 true 00:28:41.016 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:41.016 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:41.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:41.994 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:41.994 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:41.994 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:42.315 true 00:28:42.315 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:42.315 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:42.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:42.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:42.574 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:42.833 true 00:28:42.833 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:42.833 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:43.768 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:44.027 12:38:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:44.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:44.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:44.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:44.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:44.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:44.027 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:44.027 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:44.285 true 00:28:44.285 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:44.285 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:45.217 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:45.476 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:45.476 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1019 00:28:45.476 true 00:28:45.476 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:45.476 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:45.734 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:45.993 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:45.993 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:46.252 true 00:28:46.252 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:46.252 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:47.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.186 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:47.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.186 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:28:47.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.445 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:47.445 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:47.703 true 00:28:47.703 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:47.703 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:48.638 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:48.638 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:48.638 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:48.897 true 00:28:48.897 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:48.897 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:28:49.155 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:49.155 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:49.156 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:49.414 true 00:28:49.414 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:49.414 12:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.791 12:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:50.791 12:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:50.791 12:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:50.791 true 00:28:50.791 12:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:50.791 12:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:51.049 12:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:51.308 12:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:51.308 12:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:51.566 true 00:28:51.566 12:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:51.566 12:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:52.503 12:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:52.762 12:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:52.762 12:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:52.762 true 00:28:53.019 12:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759 00:28:53.020 12:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:53.020 12:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:53.277 12:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:28:53.277 12:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:28:53.535 true
00:28:53.535 12:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759
00:28:53.535 12:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:54.468 Initializing NVMe Controllers
00:28:54.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:54.468 Controller IO queue size 128, less than required.
00:28:54.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:54.468 Controller IO queue size 128, less than required.
00:28:54.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:54.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:54.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:54.468 Initialization complete. Launching workers.
00:28:54.468 ========================================================
00:28:54.468                                                                                                       Latency(us)
00:28:54.468 Device Information                                                                        : IOPS       MiB/s      Average    min        max
00:28:54.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:                  1724.83       0.84   50548.12    2656.08 1013449.95
00:28:54.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:                 17365.58       8.48    7370.53    1601.52  385428.56
00:28:54.468 ========================================================
00:28:54.468 Total                                                                                    :            19090.40       9.32   11271.65    1601.52 1013449.95
00:28:54.468
00:28:54.468 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:54.725 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:28:54.725 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:28:54.982 true
00:28:54.982 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 611759
00:28:54.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (611759) - No such process
00:28:54.982 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 611759
00:28:54.982 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:55.240 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:55.240 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:55.240 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:55.240 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:55.240 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:55.240 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:55.498 null0 00:28:55.498 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:55.498 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:55.498 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:55.755 null1 00:28:55.755 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:55.755 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:55.755 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:56.013 null2 00:28:56.013 12:38:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:56.013 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:56.013 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:56.013 null3 00:28:56.013 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:56.013 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:56.013 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:56.276 null4 00:28:56.276 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:56.276 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:56.276 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:56.536 null5 00:28:56.536 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:56.536 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:56.536 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:56.796 null6 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:56.796 null7 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:56.796 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.796 12:38:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 617096 617097 617099 617102 617103 617105 617107 617109 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.797 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:57.056 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:57.056 12:38:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:57.056 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.056 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:57.056 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:57.056 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:57.056 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:57.056 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.315 12:38:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.315 12:38:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.315 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:57.574 12:38:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.574 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:57.574 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:57.574 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:57.574 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:57.574 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:57.574 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:57.574 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:57.834 12:38:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.834 12:38:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.834 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:57.835 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:57.835 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:58.094 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:58.094 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:58.094 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:58.094 12:38:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:58.094 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.094 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.094 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:58.094 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.094 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.094 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:58.094 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.094 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.094 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.094 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:58.094 12:38:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.094 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:58.094 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.094 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.094 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:58.094 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.094 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.094 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.095 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.095 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:58.095 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:58.095 12:38:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.095 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.095 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:58.353 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:58.353 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:58.353 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:58.354 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:58.354 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:58.354 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:58.354 12:38:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:58.354 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
4 nqn.2016-06.io.spdk:cnode1 null3 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.613 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:58.871 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:58.871 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:58.871 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.872 12:38:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.872 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:59.131 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.131 12:38:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.131 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:59.131 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.131 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.131 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:59.131 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.131 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:59.131 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:59.131 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:59.131 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:59.131 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:59.131 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:59.131 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 
null4 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.390 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:59.649 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.649 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:59.649 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:59.649 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:28:59.649 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:59.649 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:59.649 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:59.649 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.908 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:59.908 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:59.909 12:38:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:59.909 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:59.909 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.168 12:38:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.168 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:00.427 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:00.427 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:00.427 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.427 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:00.427 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:00.427 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:00.427 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:00.427 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 
null2 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.686 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.687 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:00.946 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:00.946 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.946 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:00.946 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:00.946 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:00.946 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:00.946 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:00.946 12:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:00.946 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.946 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.946 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:29:00.946 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:00.946 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:00.946 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:01.205 12:38:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:01.205 rmmod nvme_tcp 00:29:01.205 rmmod nvme_fabrics 00:29:01.205 rmmod nvme_keyring 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 611493 ']' 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 611493 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 611493 ']' 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 611493 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@959 -- # uname 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 611493 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 611493' 00:29:01.205 killing process with pid 611493 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 611493 00:29:01.205 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 611493 00:29:01.465 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:01.465 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:01.465 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:01.465 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:01.465 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:01.465 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:01.465 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # 
iptables-restore 00:29:01.465 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:01.465 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:01.465 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.465 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:01.465 12:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.371 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:03.371 00:29:03.371 real 0m47.639s 00:29:03.371 user 2m57.593s 00:29:03.371 sys 0m20.137s 00:29:03.371 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:03.371 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:03.371 ************************************ 00:29:03.371 END TEST nvmf_ns_hotplug_stress 00:29:03.371 ************************************ 00:29:03.372 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:03.372 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:03.372 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:03.372 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- 
# set +x 00:29:03.631 ************************************ 00:29:03.632 START TEST nvmf_delete_subsystem 00:29:03.632 ************************************ 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:03.632 * Looking for test storage... 00:29:03.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read 
-ra ver2 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:03.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.632 --rc genhtml_branch_coverage=1 00:29:03.632 --rc genhtml_function_coverage=1 00:29:03.632 --rc genhtml_legend=1 00:29:03.632 --rc geninfo_all_blocks=1 00:29:03.632 --rc geninfo_unexecuted_blocks=1 00:29:03.632 00:29:03.632 ' 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:03.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.632 --rc genhtml_branch_coverage=1 00:29:03.632 --rc genhtml_function_coverage=1 00:29:03.632 --rc genhtml_legend=1 00:29:03.632 --rc geninfo_all_blocks=1 00:29:03.632 --rc geninfo_unexecuted_blocks=1 00:29:03.632 00:29:03.632 ' 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:03.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.632 --rc genhtml_branch_coverage=1 00:29:03.632 --rc genhtml_function_coverage=1 00:29:03.632 --rc genhtml_legend=1 00:29:03.632 --rc geninfo_all_blocks=1 00:29:03.632 --rc geninfo_unexecuted_blocks=1 00:29:03.632 00:29:03.632 ' 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:03.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.632 --rc genhtml_branch_coverage=1 00:29:03.632 --rc genhtml_function_coverage=1 00:29:03.632 --rc genhtml_legend=1 00:29:03.632 --rc geninfo_all_blocks=1 00:29:03.632 --rc geninfo_unexecuted_blocks=1 00:29:03.632 00:29:03.632 ' 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:03.632 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:03.633 12:38:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:03.633 12:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.203 12:38:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.203 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:10.204 12:38:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:10.204 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:10.204 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.204 12:38:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:10.204 Found net devices under 0000:86:00.0: cvl_0_0 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:10.204 Found net devices under 0000:86:00.1: cvl_0_1 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:10.204 12:38:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.204 12:38:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:10.204 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:10.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:10.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:29:10.205 00:29:10.205 --- 10.0.0.2 ping statistics --- 00:29:10.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.205 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:10.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:10.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:29:10.205 00:29:10.205 --- 10.0.0.1 ping statistics --- 00:29:10.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.205 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:10.205 
12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=621463 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 621463 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 621463 ']' 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:10.205 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:10.205 [2024-11-20 12:38:52.681802] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:10.205 [2024-11-20 12:38:52.682744] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:29:10.205 [2024-11-20 12:38:52.682777] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.205 [2024-11-20 12:38:52.763587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:10.205 [2024-11-20 12:38:52.806870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.205 [2024-11-20 12:38:52.806905] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.205 [2024-11-20 12:38:52.806913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.205 [2024-11-20 12:38:52.806919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.205 [2024-11-20 12:38:52.806924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:10.205 [2024-11-20 12:38:52.808090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.205 [2024-11-20 12:38:52.808091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.205 [2024-11-20 12:38:52.876238] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:10.205 [2024-11-20 12:38:52.876827] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:10.205 [2024-11-20 12:38:52.877013] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:10.465 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:10.465 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:10.465 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:10.465 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:10.465 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:10.465 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.465 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:10.465 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.465 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:10.465 [2024-11-20 12:38:53.564885] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.465 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.465 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:10.465 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.465 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:10.723 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.723 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:10.724 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.724 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:10.724 [2024-11-20 12:38:53.593274] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.724 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.724 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:10.724 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.724 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:10.724 NULL1 00:29:10.724 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.724 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:10.724 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.724 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:10.724 Delay0 00:29:10.724 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.724 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:10.724 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.724 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:10.724 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.724 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=621706 00:29:10.724 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:10.724 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:10.724 [2024-11-20 12:38:53.709589] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:12.627 12:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:12.627 12:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.627 12:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 starting I/O failed: -6 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 starting I/O failed: -6 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 starting I/O failed: -6 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 starting I/O failed: -6 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 starting I/O failed: -6 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 starting I/O failed: -6 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, 
sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 starting I/O failed: -6 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 starting I/O failed: -6 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 starting I/O failed: -6 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 starting I/O failed: -6 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 starting I/O failed: -6 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 starting I/O failed: -6 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 [2024-11-20 12:38:55.784260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e7680 is same with the state(6) to be set 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 
00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write 
completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 starting I/O failed: -6 00:29:12.886 Read completed with error (sct=0, sc=8) 00:29:12.886 Write completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Write completed with error (sct=0, sc=8) 
00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 starting I/O failed: -6 00:29:12.887 starting I/O failed: -6 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 
00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O 
failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Write completed with error (sct=0, sc=8) 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 Read completed with error (sct=0, sc=8) 00:29:12.887 starting I/O failed: -6 00:29:12.887 [2024-11-20 12:38:55.788668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd5c8000c40 is same with the state(6) to be set 00:29:12.887 starting I/O failed: -6 00:29:12.887 starting I/O failed: -6 00:29:12.887 starting I/O failed: -6 00:29:12.887 starting I/O failed: -6 00:29:12.887 starting I/O failed: -6 00:29:12.887 starting I/O failed: -6 00:29:12.887 starting I/O failed: -6 00:29:13.825 [2024-11-20 
12:38:56.763279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e89a0 is same with the state(6) to be set 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 [2024-11-20 12:38:56.787471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e72c0 is same with the state(6) to be set 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with 
error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 [2024-11-20 12:38:56.787823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e74a0 is same with the state(6) to be set 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, 
sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Write completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.825 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 [2024-11-20 12:38:56.790770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd5c800d350 is same with the state(6) to be set 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 
00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Write completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 Read completed with error (sct=0, sc=8) 00:29:13.826 [2024-11-20 12:38:56.791306] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd5c800d7e0 is same with the state(6) to be set 00:29:13.826 Initializing NVMe Controllers 00:29:13.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:13.826 Controller IO queue size 128, less than required. 00:29:13.826 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:13.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:13.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:13.826 Initialization complete. Launching workers. 00:29:13.826 ======================================================== 00:29:13.826 Latency(us) 00:29:13.826 Device Information : IOPS MiB/s Average min max 00:29:13.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.15 0.09 879544.64 332.92 1006654.05 00:29:13.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 183.62 0.09 914541.69 308.29 1010225.65 00:29:13.826 ======================================================== 00:29:13.826 Total : 360.76 0.18 897356.93 308.29 1010225.65 00:29:13.826 00:29:13.826 [2024-11-20 12:38:56.791904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e89a0 (9): Bad file descriptor 00:29:13.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:13.826 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.826 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:13.826 12:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 621706 00:29:13.826 12:38:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:14.397 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:14.397 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 621706 00:29:14.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (621706) - No such process 00:29:14.397 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 621706 00:29:14.397 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:14.397 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 621706 00:29:14.397 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:14.397 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:14.397 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:14.397 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:14.397 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 621706 00:29:14.397 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:14.397 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:14.397 12:38:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:14.397 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:14.397 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:14.397 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.397 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:14.397 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.398 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:14.398 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.398 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:14.398 [2024-11-20 12:38:57.321149] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.398 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.398 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:14.398 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.398 12:38:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:14.398 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.398 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=622265 00:29:14.398 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:14.398 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:14.398 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 622265 00:29:14.398 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:14.398 [2024-11-20 12:38:57.407982] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:14.964 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:14.964 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 622265 00:29:14.964 12:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:15.531 12:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:15.531 12:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 622265 00:29:15.531 12:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:15.789 12:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:15.789 12:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 622265 00:29:15.789 12:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:16.354 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:16.354 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 622265 00:29:16.354 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:16.920 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:16.920 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 622265 00:29:16.920 12:38:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:17.487 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:17.487 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 622265 00:29:17.487 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:17.746 Initializing NVMe Controllers 00:29:17.746 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:17.746 Controller IO queue size 128, less than required. 00:29:17.746 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:17.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:17.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:17.746 Initialization complete. Launching workers. 
00:29:17.746 ======================================================== 00:29:17.746 Latency(us) 00:29:17.746 Device Information : IOPS MiB/s Average min max 00:29:17.746 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002144.84 1000150.11 1040713.87 00:29:17.746 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003786.79 1000134.44 1009938.24 00:29:17.746 ======================================================== 00:29:17.746 Total : 256.00 0.12 1002965.82 1000134.44 1040713.87 00:29:17.746 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 622265 00:29:18.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (622265) - No such process 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 622265 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:18.006 rmmod nvme_tcp 00:29:18.006 rmmod nvme_fabrics 00:29:18.006 rmmod nvme_keyring 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 621463 ']' 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 621463 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 621463 ']' 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 621463 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:18.006 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 621463 00:29:18.006 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:18.006 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:18.006 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 621463' 00:29:18.006 killing process with pid 621463 00:29:18.006 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 621463 00:29:18.006 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 621463 00:29:18.265 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:18.265 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:18.265 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:18.265 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:18.265 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:18.265 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:18.265 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:18.265 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:18.265 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:18.265 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.265 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.265 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.171 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:20.171 00:29:20.171 real 0m16.735s 00:29:20.171 user 0m26.270s 00:29:20.171 sys 0m6.131s 00:29:20.171 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:20.171 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:20.171 ************************************ 00:29:20.171 END TEST nvmf_delete_subsystem 00:29:20.171 ************************************ 00:29:20.171 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:20.171 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:20.171 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:20.171 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:20.431 ************************************ 00:29:20.431 START TEST nvmf_host_management 00:29:20.431 ************************************ 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:20.431 * Looking for test storage... 
00:29:20.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:20.431 12:39:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.431 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:20.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.431 --rc genhtml_branch_coverage=1 00:29:20.431 --rc genhtml_function_coverage=1 00:29:20.432 --rc genhtml_legend=1 00:29:20.432 --rc geninfo_all_blocks=1 00:29:20.432 --rc geninfo_unexecuted_blocks=1 00:29:20.432 00:29:20.432 ' 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:20.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.432 --rc genhtml_branch_coverage=1 00:29:20.432 --rc genhtml_function_coverage=1 00:29:20.432 --rc genhtml_legend=1 00:29:20.432 --rc geninfo_all_blocks=1 00:29:20.432 --rc geninfo_unexecuted_blocks=1 00:29:20.432 00:29:20.432 ' 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:20.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.432 --rc genhtml_branch_coverage=1 00:29:20.432 --rc genhtml_function_coverage=1 00:29:20.432 --rc genhtml_legend=1 00:29:20.432 --rc geninfo_all_blocks=1 00:29:20.432 --rc geninfo_unexecuted_blocks=1 00:29:20.432 00:29:20.432 ' 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:20.432 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.432 --rc genhtml_branch_coverage=1 00:29:20.432 --rc genhtml_function_coverage=1 00:29:20.432 --rc genhtml_legend=1 00:29:20.432 --rc geninfo_all_blocks=1 00:29:20.432 --rc geninfo_unexecuted_blocks=1 00:29:20.432 00:29:20.432 ' 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.432 12:39:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.432 
12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:20.432 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:27.005 
12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:27.005 12:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:27.005 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:27.006 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:27.006 12:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:27.006 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.006 12:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:27.006 Found net devices under 0000:86:00.0: cvl_0_0 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:27.006 Found net devices under 0000:86:00.1: cvl_0_1 00:29:27.006 12:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:27.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:27.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:29:27.006 00:29:27.006 --- 10.0.0.2 ping statistics --- 00:29:27.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.006 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:27.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:27.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:29:27.006 00:29:27.006 --- 10.0.0.1 ping statistics --- 00:29:27.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.006 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
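The nvmf_tcp_init sequence traced above reduces to a handful of ip/iptables commands. A minimal sketch using the interface names (cvl_0_0/cvl_0_1), addresses, and port exactly as they appear in the log; the `run()` helper is a hypothetical dry-run wrapper so the steps can be reviewed without root:

```shell
# Network setup distilled from the nvmf_tcp_init trace above.
# run() is a hypothetical dry-run helper: swap its body for "$@"
# (executed as root) to actually apply the commands.
set -eu
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # move target NIC into netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP port
run ping -c 1 10.0.0.2                                       # target reachable from host
run ip netns exec "$NS" ping -c 1 10.0.0.1                   # initiator reachable from netns
```

Putting the target interface in its own namespace is what lets a single machine act as both initiator and target over real NICs, which is why the log's pings cross 10.0.0.1/10.0.0.2 in both directions.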
00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=626793 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 626793 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 626793 ']' 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.006 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:27.007 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.007 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:27.007 12:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:27.007 [2024-11-20 12:39:09.520859] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:27.007 [2024-11-20 12:39:09.521907] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:29:27.007 [2024-11-20 12:39:09.521957] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:27.007 [2024-11-20 12:39:09.601770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:27.007 [2024-11-20 12:39:09.644203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:27.007 [2024-11-20 12:39:09.644240] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:27.007 [2024-11-20 12:39:09.644248] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:27.007 [2024-11-20 12:39:09.644255] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:27.007 [2024-11-20 12:39:09.644260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:27.007 [2024-11-20 12:39:09.645734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:27.007 [2024-11-20 12:39:09.645842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:27.007 [2024-11-20 12:39:09.645866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.007 [2024-11-20 12:39:09.645866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:27.007 [2024-11-20 12:39:09.714962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:27.007 [2024-11-20 12:39:09.714990] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:27.007 [2024-11-20 12:39:09.715826] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:27.007 [2024-11-20 12:39:09.716152] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:27.007 [2024-11-20 12:39:09.716194] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:27.266 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:27.266 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:27.266 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:27.266 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:27.266 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:27.525 [2024-11-20 12:39:10.402707] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:27.525 12:39:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:27.525 Malloc0 00:29:27.525 [2024-11-20 12:39:10.494877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=627107 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 627107 /var/tmp/bdevperf.sock 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 627107 ']' 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:27.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.525 { 00:29:27.525 "params": { 00:29:27.525 "name": "Nvme$subsystem", 00:29:27.525 "trtype": "$TEST_TRANSPORT", 00:29:27.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.525 "adrfam": "ipv4", 00:29:27.525 "trsvcid": "$NVMF_PORT", 00:29:27.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.525 "hdgst": ${hdgst:-false}, 00:29:27.525 "ddgst": ${ddgst:-false} 00:29:27.525 }, 00:29:27.525 "method": "bdev_nvme_attach_controller" 00:29:27.525 } 00:29:27.525 EOF 00:29:27.525 )") 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:27.525 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:27.525 "params": { 00:29:27.525 "name": "Nvme0", 00:29:27.525 "trtype": "tcp", 00:29:27.525 "traddr": "10.0.0.2", 00:29:27.525 "adrfam": "ipv4", 00:29:27.525 "trsvcid": "4420", 00:29:27.525 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:27.525 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:27.525 "hdgst": false, 00:29:27.525 "ddgst": false 00:29:27.526 }, 00:29:27.526 "method": "bdev_nvme_attach_controller" 00:29:27.526 }' 00:29:27.526 [2024-11-20 12:39:10.587573] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:29:27.526 [2024-11-20 12:39:10.587622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid627107 ] 00:29:27.785 [2024-11-20 12:39:10.661838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.785 [2024-11-20 12:39:10.703628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.785 Running I/O for 10 seconds... 
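The JSON that `gen_nvmf_target_json 0` pipes into bdevperf via `--json /dev/fd/63` is printed in the trace above. Reproduced here as a standalone fragment with the `$subsystem` placeholders resolved the same way the log shows (all values copied from the log, nothing invented):

```shell
# Rebuild the bdevperf attach-controller config emitted above by
# gen_nvmf_target_json 0: one bdev_nvme_attach_controller call for
# subsystem 0, targeting the NVMe/TCP listener at 10.0.0.2:4420.
subsystem=0
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme${subsystem}",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode${subsystem}",
    "hostnqn": "nqn.2016-06.io.spdk:host${subsystem}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

The `hdgst`/`ddgst` values come from the `${hdgst:-false}`/`${ddgst:-false}` defaults visible in the template, so header and data digests are off unless the caller exports those variables.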
00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:28.045 12:39:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=94 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 94 -ge 100 ']' 00:29:28.045 12:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:29:28.307 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:29:28.307 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:28.307 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:28.307 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:28.307 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:28.307 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:28.307 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.307 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:29:28.307 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:29:28.307 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:28.307 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:28.307 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:28.307 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:28.307 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.307 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:28.307 [2024-11-20 12:39:11.298452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231bec0 is same with the state(6) to be set 00:29:28.307 [2024-11-20 12:39:11.298496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231bec0 is same with the state(6) to be set 00:29:28.307 [2024-11-20 12:39:11.298504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231bec0 is same with the state(6) to be set 00:29:28.307 [2024-11-20 12:39:11.298511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
set 00:29:28.307 [2024-11-20 12:39:11.298671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231bec0 is same with the state(6) to be set 00:29:28.307 [2024-11-20 12:39:11.298676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231bec0 is same with the state(6) to be set 00:29:28.307 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.307 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:28.307 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.307 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:28.307 [2024-11-20 12:39:11.305386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.307 [2024-11-20 12:39:11.305417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.307 [2024-11-20 12:39:11.305435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.307 [2024-11-20 12:39:11.305443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.307 [2024-11-20 12:39:11.305453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.307 [2024-11-20 12:39:11.305460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:28.307 [2024-11-20 12:39:11.305469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.307 [2024-11-20 12:39:11.305476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.307 [2024-11-20 12:39:11.305484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.307 [2024-11-20 12:39:11.305490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.307 [2024-11-20 12:39:11.305498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.307 [2024-11-20 12:39:11.305505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.307 [2024-11-20 12:39:11.305514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.307 [2024-11-20 12:39:11.305521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.307 [2024-11-20 12:39:11.305529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.307 [2024-11-20 12:39:11.305535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.307 [2024-11-20 12:39:11.305548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.307 [2024-11-20 12:39:11.305555] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.307 [2024-11-20 12:39:11.305564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.307 [2024-11-20 12:39:11.305571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.307 [2024-11-20 12:39:11.305580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.307 [2024-11-20 12:39:11.305587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.307 [2024-11-20 12:39:11.305595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.307 [2024-11-20 12:39:11.305603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.307 [2024-11-20 12:39:11.305611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.307 [2024-11-20 12:39:11.305617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.307 [2024-11-20 12:39:11.305626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.307 [2024-11-20 12:39:11.305634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.307 [2024-11-20 12:39:11.305642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.307 [2024-11-20 12:39:11.305648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:28.308 [2024-11-20 12:39:11.305817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305898] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.305988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.305996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.306003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.306012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.306018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.306027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.306034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.306042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.306048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.306056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.306063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.306072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.306078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.306086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.306093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.306101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.306108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.306116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.306128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.306137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.306143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.306151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.306159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.306167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.306174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.306182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.306189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.306197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.306203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.306211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.306219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.306228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.308 [2024-11-20 12:39:11.306234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.308 [2024-11-20 12:39:11.306243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:28.308 [2024-11-20 12:39:11.306249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.309 [2024-11-20 12:39:11.306257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.309 [2024-11-20 12:39:11.306263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.309 [2024-11-20 12:39:11.306271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.309 [2024-11-20 12:39:11.306278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.309 [2024-11-20 12:39:11.306286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.309 [2024-11-20 12:39:11.306293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.309 [2024-11-20 12:39:11.306301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.309 [2024-11-20 12:39:11.306308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.309 [2024-11-20 12:39:11.306318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.309 [2024-11-20 12:39:11.306324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.309 [2024-11-20 12:39:11.306333] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.309 [2024-11-20 12:39:11.306340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.309 [2024-11-20 12:39:11.306348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.309 [2024-11-20 12:39:11.306355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.309 [2024-11-20 12:39:11.306363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.309 [2024-11-20 12:39:11.306369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.309 [2024-11-20 12:39:11.306377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.309 [2024-11-20 12:39:11.306383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.309 [2024-11-20 12:39:11.306392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.309 [2024-11-20 12:39:11.306398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.309 [2024-11-20 12:39:11.307366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:28.309 task offset: 98304 on job bdev=Nvme0n1 fails 00:29:28.309 00:29:28.309 Latency(us) 00:29:28.309 
[2024-11-20T11:39:11.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.309 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:28.309 Job: Nvme0n1 ended in about 0.41 seconds with error 00:29:28.309 Verification LBA range: start 0x0 length 0x400 00:29:28.309 Nvme0n1 : 0.41 1888.50 118.03 157.38 0.00 30437.80 1595.66 27354.16 00:29:28.309 [2024-11-20T11:39:11.425Z] =================================================================================================================== 00:29:28.309 [2024-11-20T11:39:11.425Z] Total : 1888.50 118.03 157.38 0.00 30437.80 1595.66 27354.16 00:29:28.309 [2024-11-20 12:39:11.309779] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:28.309 [2024-11-20 12:39:11.309805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6f500 (9): Bad file descriptor 00:29:28.309 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.309 12:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:28.309 [2024-11-20 12:39:11.362298] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:29:29.246 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 627107 00:29:29.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (627107) - No such process 00:29:29.246 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:29.246 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:29.246 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:29.246 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:29.246 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:29.246 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:29.246 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.246 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.246 { 00:29:29.246 "params": { 00:29:29.246 "name": "Nvme$subsystem", 00:29:29.246 "trtype": "$TEST_TRANSPORT", 00:29:29.246 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.246 "adrfam": "ipv4", 00:29:29.246 "trsvcid": "$NVMF_PORT", 00:29:29.246 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.246 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.246 "hdgst": ${hdgst:-false}, 00:29:29.246 "ddgst": ${ddgst:-false} 
00:29:29.246 }, 00:29:29.246 "method": "bdev_nvme_attach_controller" 00:29:29.246 } 00:29:29.246 EOF 00:29:29.246 )") 00:29:29.246 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:29.246 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:29.246 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:29.246 12:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:29.246 "params": { 00:29:29.246 "name": "Nvme0", 00:29:29.246 "trtype": "tcp", 00:29:29.246 "traddr": "10.0.0.2", 00:29:29.246 "adrfam": "ipv4", 00:29:29.246 "trsvcid": "4420", 00:29:29.246 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:29.246 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:29.246 "hdgst": false, 00:29:29.246 "ddgst": false 00:29:29.246 }, 00:29:29.246 "method": "bdev_nvme_attach_controller" 00:29:29.246 }' 00:29:29.504 [2024-11-20 12:39:12.371848] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:29:29.504 [2024-11-20 12:39:12.371896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid627414 ] 00:29:29.504 [2024-11-20 12:39:12.448482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.504 [2024-11-20 12:39:12.489571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.763 Running I/O for 1 seconds... 
00:29:30.699 1984.00 IOPS, 124.00 MiB/s 00:29:30.699 Latency(us) 00:29:30.699 [2024-11-20T11:39:13.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:30.699 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:30.699 Verification LBA range: start 0x0 length 0x400 00:29:30.699 Nvme0n1 : 1.01 2022.27 126.39 0.00 0.00 31146.91 4644.51 27240.18 00:29:30.699 [2024-11-20T11:39:13.815Z] =================================================================================================================== 00:29:30.699 [2024-11-20T11:39:13.815Z] Total : 2022.27 126.39 0.00 0.00 31146.91 4644.51 27240.18 00:29:30.958 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:30.958 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:30.958 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:30.958 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:30.958 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:29:30.958 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:30.958 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:30.958 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:30.958 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:30.958 
12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:30.958 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:30.958 rmmod nvme_tcp 00:29:30.958 rmmod nvme_fabrics 00:29:30.958 rmmod nvme_keyring 00:29:30.958 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:30.958 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:30.958 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:30.958 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 626793 ']' 00:29:30.958 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 626793 00:29:30.958 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 626793 ']' 00:29:30.958 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 626793 00:29:30.958 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:30.958 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:30.958 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 626793 00:29:30.958 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:30.958 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:30.958 12:39:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 626793' 00:29:30.958 killing process with pid 626793 00:29:30.958 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 626793 00:29:30.958 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 626793 00:29:31.292 [2024-11-20 12:39:14.219171] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:31.292 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:31.292 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:31.292 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:31.292 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:31.292 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:29:31.292 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:31.292 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:29:31.292 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:31.292 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:31.292 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.292 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.292 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.275 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:33.275 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:33.275 00:29:33.275 real 0m13.001s 00:29:33.275 user 0m18.177s 00:29:33.275 sys 0m6.366s 00:29:33.275 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:33.275 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:33.275 ************************************ 00:29:33.275 END TEST nvmf_host_management 00:29:33.275 ************************************ 00:29:33.275 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:33.275 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:33.275 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:33.275 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:33.535 ************************************ 00:29:33.535 START TEST nvmf_lvol 00:29:33.535 ************************************ 00:29:33.535 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:33.535 * Looking for test storage... 
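The `run_test` invocation above wraps each sub-test (here `nvmf_lvol`) between the `START TEST` / `END TEST` banners and the `real/user/sys` timing summary seen in the log. A hedged sketch of that banner-wrapping pattern (illustrative only; the real helper lives in `autotest_common.sh` and also records timing and xtrace state):

```shell
#!/bin/sh
# Sketch of the run_test banner pattern visible in this log: print a
# START banner, run the sub-test command, print an END banner, and
# propagate the sub-test's exit status.
run_test_sketch() {
    local name=$1
    shift
    printf '%s\n' "************ START TEST $name ************"
    "$@"
    local rc=$?
    printf '%s\n' "************ END TEST $name ************"
    return $rc
}
```

Usage mirrors the log: `run_test_sketch nvmf_lvol ./nvmf_lvol.sh --transport=tcp --interrupt-mode` would bracket the script's output with the banners shown above.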
00:29:33.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:33.535 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:33.535 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:29:33.535 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:33.535 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:33.535 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:33.535 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:33.535 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:33.535 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:33.535 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:33.535 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:33.535 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:33.535 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:33.535 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:33.535 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:33.535 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:33.535 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:29:33.535 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:33.535 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:33.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.536 --rc genhtml_branch_coverage=1 00:29:33.536 --rc genhtml_function_coverage=1 00:29:33.536 --rc genhtml_legend=1 00:29:33.536 --rc geninfo_all_blocks=1 00:29:33.536 --rc geninfo_unexecuted_blocks=1 00:29:33.536 00:29:33.536 ' 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:33.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.536 --rc genhtml_branch_coverage=1 00:29:33.536 --rc genhtml_function_coverage=1 00:29:33.536 --rc genhtml_legend=1 00:29:33.536 --rc geninfo_all_blocks=1 00:29:33.536 --rc geninfo_unexecuted_blocks=1 00:29:33.536 00:29:33.536 ' 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:33.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.536 --rc genhtml_branch_coverage=1 00:29:33.536 --rc genhtml_function_coverage=1 00:29:33.536 --rc genhtml_legend=1 00:29:33.536 --rc geninfo_all_blocks=1 00:29:33.536 --rc geninfo_unexecuted_blocks=1 00:29:33.536 00:29:33.536 ' 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:33.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.536 --rc genhtml_branch_coverage=1 00:29:33.536 --rc genhtml_function_coverage=1 00:29:33.536 --rc genhtml_legend=1 00:29:33.536 --rc geninfo_all_blocks=1 00:29:33.536 --rc geninfo_unexecuted_blocks=1 00:29:33.536 00:29:33.536 ' 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:33.536 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:33.537 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.537 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:33.537 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:33.537 
12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:33.537 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:40.108 12:39:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:40.108 12:39:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:40.108 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:40.108 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.108 12:39:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:40.108 Found net devices under 0000:86:00.0: cvl_0_0 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.108 12:39:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:40.108 Found net devices under 0000:86:00.1: cvl_0_1 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:40.108 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:40.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:40.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:29:40.109 00:29:40.109 --- 10.0.0.2 ping statistics --- 00:29:40.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.109 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:40.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:40.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:29:40.109 00:29:40.109 --- 10.0.0.1 ping statistics --- 00:29:40.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.109 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=631173 
00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 631173 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 631173 ']' 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:40.109 [2024-11-20 12:39:22.585454] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:40.109 [2024-11-20 12:39:22.586447] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:29:40.109 [2024-11-20 12:39:22.586483] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:40.109 [2024-11-20 12:39:22.665488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:40.109 [2024-11-20 12:39:22.707584] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:40.109 [2024-11-20 12:39:22.707622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:40.109 [2024-11-20 12:39:22.707629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:40.109 [2024-11-20 12:39:22.707636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:40.109 [2024-11-20 12:39:22.707641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:40.109 [2024-11-20 12:39:22.708982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.109 [2024-11-20 12:39:22.709060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.109 [2024-11-20 12:39:22.709061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:40.109 [2024-11-20 12:39:22.777299] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:40.109 [2024-11-20 12:39:22.778099] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:40.109 [2024-11-20 12:39:22.778257] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:40.109 [2024-11-20 12:39:22.778425] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:40.109 12:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:40.109 [2024-11-20 12:39:23.021850] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:40.109 12:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:40.368 12:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:40.369 12:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:40.627 12:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:40.628 12:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:40.628 12:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:40.887 12:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=546127a9-c34f-4be6-879d-38277269a679 00:29:40.887 12:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 546127a9-c34f-4be6-879d-38277269a679 lvol 20 00:29:41.146 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=44343f14-6037-494e-9e81-c5b6e11c81a9 00:29:41.146 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:41.404 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 44343f14-6037-494e-9e81-c5b6e11c81a9 00:29:41.405 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:41.663 [2024-11-20 12:39:24.657758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:41.663 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:41.921 
12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=631511 00:29:41.921 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:41.921 12:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:42.858 12:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 44343f14-6037-494e-9e81-c5b6e11c81a9 MY_SNAPSHOT 00:29:43.117 12:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1e1c27f4-9911-4026-912b-f8bec9f587d0 00:29:43.117 12:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 44343f14-6037-494e-9e81-c5b6e11c81a9 30 00:29:43.376 12:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1e1c27f4-9911-4026-912b-f8bec9f587d0 MY_CLONE 00:29:43.635 12:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=18f6ac26-c4a6-4a54-af43-4ba978b104ec 00:29:43.635 12:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 18f6ac26-c4a6-4a54-af43-4ba978b104ec 00:29:44.202 12:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 631511 00:29:52.320 Initializing NVMe Controllers 00:29:52.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:52.320 
Controller IO queue size 128, less than required. 00:29:52.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:52.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:52.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:52.320 Initialization complete. Launching workers. 00:29:52.320 ======================================================== 00:29:52.320 Latency(us) 00:29:52.320 Device Information : IOPS MiB/s Average min max 00:29:52.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12258.10 47.88 10443.14 1565.37 49516.70 00:29:52.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12152.20 47.47 10535.92 1551.77 48918.07 00:29:52.320 ======================================================== 00:29:52.320 Total : 24410.30 95.35 10489.33 1551.77 49516.70 00:29:52.320 00:29:52.320 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:52.578 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 44343f14-6037-494e-9e81-c5b6e11c81a9 00:29:52.837 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 546127a9-c34f-4be6-879d-38277269a679 00:29:52.837 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:52.837 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:52.837 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:29:52.837 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:52.837 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:52.837 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:52.837 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:52.837 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:52.837 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:52.837 rmmod nvme_tcp 00:29:52.837 rmmod nvme_fabrics 00:29:52.837 rmmod nvme_keyring 00:29:53.095 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:53.095 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:53.095 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:53.095 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 631173 ']' 00:29:53.095 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 631173 00:29:53.095 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 631173 ']' 00:29:53.095 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 631173 00:29:53.095 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:53.095 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:53.095 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 631173 00:29:53.095 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:53.095 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:53.095 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 631173' 00:29:53.095 killing process with pid 631173 00:29:53.095 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 631173 00:29:53.095 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 631173 00:29:53.353 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:53.353 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:53.353 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:53.353 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:53.353 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:53.353 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:53.353 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:53.353 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:53.353 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:53.353 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.353 12:39:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.353 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.257 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:55.258 00:29:55.258 real 0m21.897s 00:29:55.258 user 0m55.711s 00:29:55.258 sys 0m9.840s 00:29:55.258 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:55.258 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:55.258 ************************************ 00:29:55.258 END TEST nvmf_lvol 00:29:55.258 ************************************ 00:29:55.258 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:55.258 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:55.258 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:55.258 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:55.258 ************************************ 00:29:55.258 START TEST nvmf_lvs_grow 00:29:55.258 ************************************ 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:55.518 * Looking for test storage... 
00:29:55.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:55.518 12:39:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:55.518 12:39:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:55.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.518 --rc genhtml_branch_coverage=1 00:29:55.518 --rc genhtml_function_coverage=1 00:29:55.518 --rc genhtml_legend=1 00:29:55.518 --rc geninfo_all_blocks=1 00:29:55.518 --rc geninfo_unexecuted_blocks=1 00:29:55.518 00:29:55.518 ' 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:55.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.518 --rc genhtml_branch_coverage=1 00:29:55.518 --rc genhtml_function_coverage=1 00:29:55.518 --rc genhtml_legend=1 00:29:55.518 --rc geninfo_all_blocks=1 00:29:55.518 --rc geninfo_unexecuted_blocks=1 00:29:55.518 00:29:55.518 ' 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:55.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.518 --rc genhtml_branch_coverage=1 00:29:55.518 --rc genhtml_function_coverage=1 00:29:55.518 --rc genhtml_legend=1 00:29:55.518 --rc geninfo_all_blocks=1 00:29:55.518 --rc geninfo_unexecuted_blocks=1 00:29:55.518 00:29:55.518 ' 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:55.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.518 --rc genhtml_branch_coverage=1 00:29:55.518 --rc genhtml_function_coverage=1 00:29:55.518 --rc genhtml_legend=1 00:29:55.518 --rc geninfo_all_blocks=1 00:29:55.518 --rc 
geninfo_unexecuted_blocks=1 00:29:55.518 00:29:55.518 ' 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:55.518 12:39:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:55.518 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.519 12:39:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:55.519 12:39:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:55.519 12:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:02.093 
12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:02.093 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:02.093 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:02.093 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:02.093 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:02.093 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:02.093 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:02.093 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:02.093 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:02.093 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:02.093 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:02.093 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:02.093 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:02.094 12:39:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:02.094 12:39:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:02.094 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:02.094 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:02.094 Found net devices under 0000:86:00.0: cvl_0_0 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.094 12:39:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:02.094 Found net devices under 0000:86:00.1: cvl_0_1 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:02.094 
12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:02.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:02.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:30:02.094 00:30:02.094 --- 10.0.0.2 ping statistics --- 00:30:02.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.094 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:02.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:02.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:30:02.094 00:30:02.094 --- 10.0.0.1 ping statistics --- 00:30:02.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.094 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:02.094 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:02.095 12:39:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=636800 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 636800 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 636800 ']' 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:02.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:02.095 [2024-11-20 12:39:44.545024] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:02.095 [2024-11-20 12:39:44.546061] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:30:02.095 [2024-11-20 12:39:44.546101] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:02.095 [2024-11-20 12:39:44.626501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.095 [2024-11-20 12:39:44.666430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:02.095 [2024-11-20 12:39:44.666465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:02.095 [2024-11-20 12:39:44.666473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:02.095 [2024-11-20 12:39:44.666479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:02.095 [2024-11-20 12:39:44.666483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:02.095 [2024-11-20 12:39:44.667011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.095 [2024-11-20 12:39:44.733942] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:02.095 [2024-11-20 12:39:44.734173] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:02.095 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:02.095 [2024-11-20 12:39:44.971679] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:02.095 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:02.095 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:02.095 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:02.095 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:02.095 ************************************ 00:30:02.095 START TEST lvs_grow_clean 00:30:02.095 ************************************ 00:30:02.095 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:30:02.095 12:39:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:02.095 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:02.095 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:02.095 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:02.095 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:02.095 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:02.095 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:02.095 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:02.095 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:02.354 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:02.354 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:02.613 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6e0fa8c7-6389-4aaf-b806-33ee3f720143 00:30:02.613 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:02.613 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e0fa8c7-6389-4aaf-b806-33ee3f720143 00:30:02.613 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:02.613 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:02.613 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6e0fa8c7-6389-4aaf-b806-33ee3f720143 lvol 150 00:30:02.872 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=469877a4-7acc-470c-8cd7-49c0edbebd80 00:30:02.872 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:02.872 12:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:03.131 [2024-11-20 12:39:46.059397] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:03.131 [2024-11-20 12:39:46.059540] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:03.131 true 00:30:03.131 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e0fa8c7-6389-4aaf-b806-33ee3f720143 00:30:03.131 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:03.390 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:03.390 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:03.390 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 469877a4-7acc-470c-8cd7-49c0edbebd80 00:30:03.648 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:03.908 [2024-11-20 12:39:46.795877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.908 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:03.908 12:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=637292 00:30:03.908 12:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:03.908 12:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:03.908 12:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 637292 /var/tmp/bdevperf.sock 00:30:03.908 12:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 637292 ']' 00:30:03.908 12:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:03.908 12:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:03.908 12:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:03.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:03.908 12:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:03.908 12:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:04.167 [2024-11-20 12:39:47.061219] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:30:04.167 [2024-11-20 12:39:47.061269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid637292 ] 00:30:04.167 [2024-11-20 12:39:47.135940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.167 [2024-11-20 12:39:47.179101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.167 12:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:04.167 12:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:04.167 12:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:04.735 Nvme0n1 00:30:04.735 12:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:04.735 [ 00:30:04.735 { 00:30:04.735 "name": "Nvme0n1", 00:30:04.735 "aliases": [ 00:30:04.735 "469877a4-7acc-470c-8cd7-49c0edbebd80" 00:30:04.735 ], 00:30:04.735 "product_name": "NVMe disk", 00:30:04.735 
"block_size": 4096, 00:30:04.735 "num_blocks": 38912, 00:30:04.735 "uuid": "469877a4-7acc-470c-8cd7-49c0edbebd80", 00:30:04.735 "numa_id": 1, 00:30:04.735 "assigned_rate_limits": { 00:30:04.735 "rw_ios_per_sec": 0, 00:30:04.735 "rw_mbytes_per_sec": 0, 00:30:04.735 "r_mbytes_per_sec": 0, 00:30:04.735 "w_mbytes_per_sec": 0 00:30:04.735 }, 00:30:04.735 "claimed": false, 00:30:04.735 "zoned": false, 00:30:04.735 "supported_io_types": { 00:30:04.735 "read": true, 00:30:04.735 "write": true, 00:30:04.735 "unmap": true, 00:30:04.735 "flush": true, 00:30:04.735 "reset": true, 00:30:04.735 "nvme_admin": true, 00:30:04.735 "nvme_io": true, 00:30:04.735 "nvme_io_md": false, 00:30:04.735 "write_zeroes": true, 00:30:04.735 "zcopy": false, 00:30:04.735 "get_zone_info": false, 00:30:04.735 "zone_management": false, 00:30:04.735 "zone_append": false, 00:30:04.735 "compare": true, 00:30:04.735 "compare_and_write": true, 00:30:04.735 "abort": true, 00:30:04.735 "seek_hole": false, 00:30:04.735 "seek_data": false, 00:30:04.735 "copy": true, 00:30:04.735 "nvme_iov_md": false 00:30:04.735 }, 00:30:04.735 "memory_domains": [ 00:30:04.735 { 00:30:04.735 "dma_device_id": "system", 00:30:04.735 "dma_device_type": 1 00:30:04.735 } 00:30:04.735 ], 00:30:04.735 "driver_specific": { 00:30:04.735 "nvme": [ 00:30:04.735 { 00:30:04.735 "trid": { 00:30:04.735 "trtype": "TCP", 00:30:04.735 "adrfam": "IPv4", 00:30:04.735 "traddr": "10.0.0.2", 00:30:04.736 "trsvcid": "4420", 00:30:04.736 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:04.736 }, 00:30:04.736 "ctrlr_data": { 00:30:04.736 "cntlid": 1, 00:30:04.736 "vendor_id": "0x8086", 00:30:04.736 "model_number": "SPDK bdev Controller", 00:30:04.736 "serial_number": "SPDK0", 00:30:04.736 "firmware_revision": "25.01", 00:30:04.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:04.736 "oacs": { 00:30:04.736 "security": 0, 00:30:04.736 "format": 0, 00:30:04.736 "firmware": 0, 00:30:04.736 "ns_manage": 0 00:30:04.736 }, 00:30:04.736 "multi_ctrlr": true, 
00:30:04.736 "ana_reporting": false 00:30:04.736 }, 00:30:04.736 "vs": { 00:30:04.736 "nvme_version": "1.3" 00:30:04.736 }, 00:30:04.736 "ns_data": { 00:30:04.736 "id": 1, 00:30:04.736 "can_share": true 00:30:04.736 } 00:30:04.736 } 00:30:04.736 ], 00:30:04.736 "mp_policy": "active_passive" 00:30:04.736 } 00:30:04.736 } 00:30:04.736 ] 00:30:04.736 12:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=637310 00:30:04.736 12:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:04.736 12:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:04.994 Running I/O for 10 seconds... 00:30:05.931 Latency(us) 00:30:05.931 [2024-11-20T11:39:49.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:05.931 Nvme0n1 : 1.00 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:30:05.931 [2024-11-20T11:39:49.047Z] =================================================================================================================== 00:30:05.931 [2024-11-20T11:39:49.047Z] Total : 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:30:05.931 00:30:06.868 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6e0fa8c7-6389-4aaf-b806-33ee3f720143 00:30:06.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:06.868 Nvme0n1 : 2.00 22559.50 88.12 0.00 0.00 0.00 0.00 0.00 00:30:06.868 [2024-11-20T11:39:49.984Z] 
=================================================================================================================== 00:30:06.868 [2024-11-20T11:39:49.984Z] Total : 22559.50 88.12 0.00 0.00 0.00 0.00 0.00 00:30:06.868 00:30:06.868 true 00:30:07.127 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e0fa8c7-6389-4aaf-b806-33ee3f720143 00:30:07.127 12:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:07.127 12:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:07.127 12:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:07.127 12:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 637310 00:30:08.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:08.063 Nvme0n1 : 3.00 22575.00 88.18 0.00 0.00 0.00 0.00 0.00 00:30:08.063 [2024-11-20T11:39:51.179Z] =================================================================================================================== 00:30:08.063 [2024-11-20T11:39:51.179Z] Total : 22575.00 88.18 0.00 0.00 0.00 0.00 0.00 00:30:08.063 00:30:09.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:09.000 Nvme0n1 : 4.00 22709.75 88.71 0.00 0.00 0.00 0.00 0.00 00:30:09.000 [2024-11-20T11:39:52.116Z] =================================================================================================================== 00:30:09.000 [2024-11-20T11:39:52.116Z] Total : 22709.75 88.71 0.00 0.00 0.00 0.00 0.00 00:30:09.000 00:30:09.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:30:09.938 Nvme0n1 : 5.00 22790.60 89.03 0.00 0.00 0.00 0.00 0.00 00:30:09.938 [2024-11-20T11:39:53.054Z] =================================================================================================================== 00:30:09.938 [2024-11-20T11:39:53.054Z] Total : 22790.60 89.03 0.00 0.00 0.00 0.00 0.00 00:30:09.938 00:30:10.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:10.876 Nvme0n1 : 6.00 22844.50 89.24 0.00 0.00 0.00 0.00 0.00 00:30:10.876 [2024-11-20T11:39:53.992Z] =================================================================================================================== 00:30:10.876 [2024-11-20T11:39:53.992Z] Total : 22844.50 89.24 0.00 0.00 0.00 0.00 0.00 00:30:10.876 00:30:11.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:11.815 Nvme0n1 : 7.00 22883.00 89.39 0.00 0.00 0.00 0.00 0.00 00:30:11.815 [2024-11-20T11:39:54.931Z] =================================================================================================================== 00:30:11.815 [2024-11-20T11:39:54.931Z] Total : 22883.00 89.39 0.00 0.00 0.00 0.00 0.00 00:30:11.815 00:30:13.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:13.193 Nvme0n1 : 8.00 22911.88 89.50 0.00 0.00 0.00 0.00 0.00 00:30:13.193 [2024-11-20T11:39:56.309Z] =================================================================================================================== 00:30:13.193 [2024-11-20T11:39:56.309Z] Total : 22911.88 89.50 0.00 0.00 0.00 0.00 0.00 00:30:13.193 00:30:14.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:14.131 Nvme0n1 : 9.00 22934.33 89.59 0.00 0.00 0.00 0.00 0.00 00:30:14.131 [2024-11-20T11:39:57.247Z] =================================================================================================================== 00:30:14.131 [2024-11-20T11:39:57.247Z] Total : 22934.33 89.59 0.00 0.00 0.00 0.00 0.00 00:30:14.131 
00:30:15.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:15.070 Nvme0n1 : 10.00 22965.00 89.71 0.00 0.00 0.00 0.00 0.00 00:30:15.070 [2024-11-20T11:39:58.186Z] =================================================================================================================== 00:30:15.070 [2024-11-20T11:39:58.186Z] Total : 22965.00 89.71 0.00 0.00 0.00 0.00 0.00 00:30:15.070 00:30:15.070 00:30:15.070 Latency(us) 00:30:15.070 [2024-11-20T11:39:58.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:15.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:15.070 Nvme0n1 : 10.00 22967.48 89.72 0.00 0.00 5570.18 3362.28 26556.33 00:30:15.070 [2024-11-20T11:39:58.186Z] =================================================================================================================== 00:30:15.070 [2024-11-20T11:39:58.186Z] Total : 22967.48 89.72 0.00 0.00 5570.18 3362.28 26556.33 00:30:15.070 { 00:30:15.070 "results": [ 00:30:15.070 { 00:30:15.070 "job": "Nvme0n1", 00:30:15.070 "core_mask": "0x2", 00:30:15.070 "workload": "randwrite", 00:30:15.070 "status": "finished", 00:30:15.070 "queue_depth": 128, 00:30:15.070 "io_size": 4096, 00:30:15.070 "runtime": 10.004492, 00:30:15.070 "iops": 22967.48300663342, 00:30:15.070 "mibps": 89.7167304946618, 00:30:15.070 "io_failed": 0, 00:30:15.070 "io_timeout": 0, 00:30:15.070 "avg_latency_us": 5570.18131584853, 00:30:15.070 "min_latency_us": 3362.2817391304347, 00:30:15.070 "max_latency_us": 26556.326956521738 00:30:15.070 } 00:30:15.070 ], 00:30:15.070 "core_count": 1 00:30:15.070 } 00:30:15.070 12:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 637292 00:30:15.070 12:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 637292 ']' 00:30:15.070 12:39:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 637292 00:30:15.070 12:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:15.070 12:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:15.070 12:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 637292 00:30:15.070 12:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:15.070 12:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:15.070 12:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 637292' 00:30:15.070 killing process with pid 637292 00:30:15.070 12:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 637292 00:30:15.070 Received shutdown signal, test time was about 10.000000 seconds 00:30:15.070 00:30:15.070 Latency(us) 00:30:15.070 [2024-11-20T11:39:58.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:15.070 [2024-11-20T11:39:58.186Z] =================================================================================================================== 00:30:15.070 [2024-11-20T11:39:58.186Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:15.070 12:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 637292 00:30:15.070 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:15.329 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:15.588 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e0fa8c7-6389-4aaf-b806-33ee3f720143 00:30:15.588 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:15.848 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:15.848 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:15.848 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:15.848 [2024-11-20 12:39:58.919495] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:16.108 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e0fa8c7-6389-4aaf-b806-33ee3f720143 00:30:16.108 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:16.108 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e0fa8c7-6389-4aaf-b806-33ee3f720143 00:30:16.108 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:16.108 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:16.108 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:16.108 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:16.108 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:16.108 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:16.108 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:16.108 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:16.108 12:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e0fa8c7-6389-4aaf-b806-33ee3f720143 00:30:16.108 request: 00:30:16.108 { 00:30:16.108 "uuid": "6e0fa8c7-6389-4aaf-b806-33ee3f720143", 00:30:16.108 "method": 
"bdev_lvol_get_lvstores", 00:30:16.108 "req_id": 1 00:30:16.108 } 00:30:16.108 Got JSON-RPC error response 00:30:16.108 response: 00:30:16.108 { 00:30:16.108 "code": -19, 00:30:16.108 "message": "No such device" 00:30:16.108 } 00:30:16.108 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:16.108 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:16.108 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:16.108 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:16.108 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:16.367 aio_bdev 00:30:16.367 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 469877a4-7acc-470c-8cd7-49c0edbebd80 00:30:16.367 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=469877a4-7acc-470c-8cd7-49c0edbebd80 00:30:16.367 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:16.367 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:16.367 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:16.367 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:16.367 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:16.625 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 469877a4-7acc-470c-8cd7-49c0edbebd80 -t 2000 00:30:16.885 [ 00:30:16.885 { 00:30:16.885 "name": "469877a4-7acc-470c-8cd7-49c0edbebd80", 00:30:16.885 "aliases": [ 00:30:16.885 "lvs/lvol" 00:30:16.885 ], 00:30:16.885 "product_name": "Logical Volume", 00:30:16.885 "block_size": 4096, 00:30:16.885 "num_blocks": 38912, 00:30:16.885 "uuid": "469877a4-7acc-470c-8cd7-49c0edbebd80", 00:30:16.885 "assigned_rate_limits": { 00:30:16.885 "rw_ios_per_sec": 0, 00:30:16.885 "rw_mbytes_per_sec": 0, 00:30:16.885 "r_mbytes_per_sec": 0, 00:30:16.885 "w_mbytes_per_sec": 0 00:30:16.885 }, 00:30:16.885 "claimed": false, 00:30:16.885 "zoned": false, 00:30:16.885 "supported_io_types": { 00:30:16.885 "read": true, 00:30:16.885 "write": true, 00:30:16.885 "unmap": true, 00:30:16.885 "flush": false, 00:30:16.885 "reset": true, 00:30:16.885 "nvme_admin": false, 00:30:16.885 "nvme_io": false, 00:30:16.885 "nvme_io_md": false, 00:30:16.885 "write_zeroes": true, 00:30:16.885 "zcopy": false, 00:30:16.885 "get_zone_info": false, 00:30:16.885 "zone_management": false, 00:30:16.885 "zone_append": false, 00:30:16.885 "compare": false, 00:30:16.885 "compare_and_write": false, 00:30:16.885 "abort": false, 00:30:16.885 "seek_hole": true, 00:30:16.885 "seek_data": true, 00:30:16.885 "copy": false, 00:30:16.885 "nvme_iov_md": false 00:30:16.885 }, 00:30:16.885 "driver_specific": { 00:30:16.885 "lvol": { 00:30:16.885 "lvol_store_uuid": "6e0fa8c7-6389-4aaf-b806-33ee3f720143", 00:30:16.885 "base_bdev": "aio_bdev", 00:30:16.885 
"thin_provision": false, 00:30:16.885 "num_allocated_clusters": 38, 00:30:16.885 "snapshot": false, 00:30:16.885 "clone": false, 00:30:16.885 "esnap_clone": false 00:30:16.885 } 00:30:16.885 } 00:30:16.885 } 00:30:16.885 ] 00:30:16.885 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:16.885 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e0fa8c7-6389-4aaf-b806-33ee3f720143 00:30:16.885 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:16.885 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:16.885 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6e0fa8c7-6389-4aaf-b806-33ee3f720143 00:30:16.885 12:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:17.144 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:17.144 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 469877a4-7acc-470c-8cd7-49c0edbebd80 00:30:17.403 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6e0fa8c7-6389-4aaf-b806-33ee3f720143 
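The numbers asserted throughout the clean-grow test above (49 data clusters before the grow, 99 after, 51200 → 102400 blocks reported by `bdev_aio_rescan`, 61 free clusters once the 150 MiB lvol is allocated, and the 89.72 MiB/s figure bdevperf derives from its IOPS) are internally consistent. A minimal sketch checking that arithmetic, under the assumption (taken from the log's own parameters) of 4096-byte blocks, a 4 MiB cluster size (`--cluster-sz 4194304`), and exactly one cluster of lvstore metadata overhead (the observed 50 − 49 difference):

```python
# Sanity-check the cluster/block arithmetic from the lvs_grow test log.
# Assumptions from the log: 4096-byte blocks, 4 MiB clusters, a 200 MiB
# AIO file grown to 400 MiB, one metadata cluster, and a 150 MiB lvol.

BLOCK_SIZE = 4096
CLUSTER_SIZE = 4 * 1024 * 1024      # --cluster-sz 4194304
MD_CLUSTERS = 1                     # observed lvstore overhead: 50 - 49

def blocks(size_bytes):
    """Block count the AIO bdev reports for a backing file of this size."""
    return size_bytes // BLOCK_SIZE

def data_clusters(size_bytes):
    """total_data_clusters reported by bdev_lvol_get_lvstores."""
    return size_bytes // CLUSTER_SIZE - MD_CLUSTERS

old_size = 200 * 1024 * 1024        # initial truncate -s 200M
new_size = 400 * 1024 * 1024        # grow via truncate -s 400M
lvol_size = 150 * 1024 * 1024       # bdev_lvol_create ... lvol 150

print(blocks(old_size), blocks(new_size))   # 51200 102400 (bdev_aio_rescan)
print(data_clusters(old_size))              # 49 (before grow_lvstore)
print(data_clusters(new_size))              # 99 (after grow_lvstore)

# The 150 MiB lvol allocates ceil(150/4) = 38 clusters, leaving 99-38 free.
allocated = -(-lvol_size // CLUSTER_SIZE)
print(data_clusters(new_size) - allocated)  # 61 (free_clusters in the log)

# bdevperf's MiB/s column is just IOPS * io_size scaled to MiB.
print(round(22967.48 * BLOCK_SIZE / 2**20, 2))  # 89.72
```

This matches every cluster and block count the test's `(( ... == ... ))` checks assert; the one-metadata-cluster overhead is an inference from the reported values, not something the log states explicitly.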
00:30:17.661 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:17.920 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:17.920 00:30:17.920 real 0m15.776s 00:30:17.920 user 0m15.278s 00:30:17.920 sys 0m1.522s 00:30:17.920 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:17.920 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:17.920 ************************************ 00:30:17.920 END TEST lvs_grow_clean 00:30:17.920 ************************************ 00:30:17.920 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:17.920 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:17.920 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:17.920 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:17.920 ************************************ 00:30:17.920 START TEST lvs_grow_dirty 00:30:17.920 ************************************ 00:30:17.920 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:17.920 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:17.920 12:40:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:17.921 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:17.921 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:17.921 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:17.921 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:17.921 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:17.921 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:17.921 12:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:18.180 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:18.180 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:18.439 12:40:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=affda5f8-877b-4120-9b22-492aa866f0fb 00:30:18.439 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u affda5f8-877b-4120-9b22-492aa866f0fb 00:30:18.439 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:18.439 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:18.439 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:18.439 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u affda5f8-877b-4120-9b22-492aa866f0fb lvol 150 00:30:18.697 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=35f8e742-4c72-4833-9c37-561bceb3d5dd 00:30:18.697 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:18.697 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:18.957 [2024-11-20 12:40:01.867394] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:18.957 [2024-11-20 
12:40:01.867528] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:18.957 true 00:30:18.957 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:18.957 12:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u affda5f8-877b-4120-9b22-492aa866f0fb 00:30:19.215 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:19.215 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:19.215 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 35f8e742-4c72-4833-9c37-561bceb3d5dd 00:30:19.474 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:19.732 [2024-11-20 12:40:02.623829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.732 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:19.732 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=639880 00:30:19.732 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:19.732 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:19.732 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 639880 /var/tmp/bdevperf.sock 00:30:19.732 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 639880 ']' 00:30:19.732 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:19.732 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:19.732 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:19.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:19.732 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:19.732 12:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:19.991 [2024-11-20 12:40:02.879083] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:30:19.991 [2024-11-20 12:40:02.879130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639880 ] 00:30:19.991 [2024-11-20 12:40:02.953189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.992 [2024-11-20 12:40:02.995125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.992 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:19.992 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:19.992 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:20.250 Nvme0n1 00:30:20.509 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:20.509 [ 00:30:20.509 { 00:30:20.509 "name": "Nvme0n1", 00:30:20.509 "aliases": [ 00:30:20.509 "35f8e742-4c72-4833-9c37-561bceb3d5dd" 00:30:20.509 ], 00:30:20.509 "product_name": "NVMe disk", 00:30:20.509 "block_size": 4096, 00:30:20.509 "num_blocks": 38912, 00:30:20.509 "uuid": "35f8e742-4c72-4833-9c37-561bceb3d5dd", 00:30:20.509 "numa_id": 1, 00:30:20.509 "assigned_rate_limits": { 00:30:20.509 "rw_ios_per_sec": 0, 00:30:20.509 "rw_mbytes_per_sec": 0, 00:30:20.509 "r_mbytes_per_sec": 0, 00:30:20.509 "w_mbytes_per_sec": 0 00:30:20.509 }, 00:30:20.509 "claimed": false, 00:30:20.509 "zoned": false, 
00:30:20.509 "supported_io_types": { 00:30:20.509 "read": true, 00:30:20.509 "write": true, 00:30:20.509 "unmap": true, 00:30:20.509 "flush": true, 00:30:20.509 "reset": true, 00:30:20.509 "nvme_admin": true, 00:30:20.509 "nvme_io": true, 00:30:20.509 "nvme_io_md": false, 00:30:20.509 "write_zeroes": true, 00:30:20.509 "zcopy": false, 00:30:20.509 "get_zone_info": false, 00:30:20.509 "zone_management": false, 00:30:20.509 "zone_append": false, 00:30:20.509 "compare": true, 00:30:20.509 "compare_and_write": true, 00:30:20.509 "abort": true, 00:30:20.509 "seek_hole": false, 00:30:20.509 "seek_data": false, 00:30:20.509 "copy": true, 00:30:20.509 "nvme_iov_md": false 00:30:20.509 }, 00:30:20.509 "memory_domains": [ 00:30:20.509 { 00:30:20.509 "dma_device_id": "system", 00:30:20.509 "dma_device_type": 1 00:30:20.509 } 00:30:20.509 ], 00:30:20.509 "driver_specific": { 00:30:20.509 "nvme": [ 00:30:20.509 { 00:30:20.509 "trid": { 00:30:20.509 "trtype": "TCP", 00:30:20.509 "adrfam": "IPv4", 00:30:20.509 "traddr": "10.0.0.2", 00:30:20.509 "trsvcid": "4420", 00:30:20.510 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:20.510 }, 00:30:20.510 "ctrlr_data": { 00:30:20.510 "cntlid": 1, 00:30:20.510 "vendor_id": "0x8086", 00:30:20.510 "model_number": "SPDK bdev Controller", 00:30:20.510 "serial_number": "SPDK0", 00:30:20.510 "firmware_revision": "25.01", 00:30:20.510 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:20.510 "oacs": { 00:30:20.510 "security": 0, 00:30:20.510 "format": 0, 00:30:20.510 "firmware": 0, 00:30:20.510 "ns_manage": 0 00:30:20.510 }, 00:30:20.510 "multi_ctrlr": true, 00:30:20.510 "ana_reporting": false 00:30:20.510 }, 00:30:20.510 "vs": { 00:30:20.510 "nvme_version": "1.3" 00:30:20.510 }, 00:30:20.510 "ns_data": { 00:30:20.510 "id": 1, 00:30:20.510 "can_share": true 00:30:20.510 } 00:30:20.510 } 00:30:20.510 ], 00:30:20.510 "mp_policy": "active_passive" 00:30:20.510 } 00:30:20.510 } 00:30:20.510 ] 00:30:20.510 12:40:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:20.510 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=639890 00:30:20.510 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:20.769 Running I/O for 10 seconds... 00:30:21.706 Latency(us) 00:30:21.706 [2024-11-20T11:40:04.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:21.706 Nvme0n1 : 1.00 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:30:21.706 [2024-11-20T11:40:04.822Z] =================================================================================================================== 00:30:21.706 [2024-11-20T11:40:04.822Z] Total : 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:30:21.706 00:30:22.643 12:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u affda5f8-877b-4120-9b22-492aa866f0fb 00:30:22.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:22.643 Nvme0n1 : 2.00 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:30:22.643 [2024-11-20T11:40:05.759Z] =================================================================================================================== 00:30:22.643 [2024-11-20T11:40:05.759Z] Total : 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:30:22.643 00:30:22.643 true 00:30:22.902 12:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u affda5f8-877b-4120-9b22-492aa866f0fb 00:30:22.902 12:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:22.902 12:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:22.902 12:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:22.902 12:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 639890 00:30:23.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:23.838 Nvme0n1 : 3.00 22754.33 88.88 0.00 0.00 0.00 0.00 0.00 00:30:23.838 [2024-11-20T11:40:06.954Z] =================================================================================================================== 00:30:23.838 [2024-11-20T11:40:06.954Z] Total : 22754.33 88.88 0.00 0.00 0.00 0.00 0.00 00:30:23.838 00:30:24.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:24.817 Nvme0n1 : 4.00 22840.50 89.22 0.00 0.00 0.00 0.00 0.00 00:30:24.817 [2024-11-20T11:40:07.933Z] =================================================================================================================== 00:30:24.817 [2024-11-20T11:40:07.933Z] Total : 22840.50 89.22 0.00 0.00 0.00 0.00 0.00 00:30:24.817 00:30:25.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:25.791 Nvme0n1 : 5.00 22882.60 89.39 0.00 0.00 0.00 0.00 0.00 00:30:25.791 [2024-11-20T11:40:08.907Z] =================================================================================================================== 00:30:25.791 [2024-11-20T11:40:08.907Z] Total : 22882.60 89.39 0.00 0.00 0.00 0.00 0.00 00:30:25.791 00:30:26.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:30:26.727 Nvme0n1 : 6.00 22939.83 89.61 0.00 0.00 0.00 0.00 0.00 00:30:26.727 [2024-11-20T11:40:09.843Z] =================================================================================================================== 00:30:26.727 [2024-11-20T11:40:09.843Z] Total : 22939.83 89.61 0.00 0.00 0.00 0.00 0.00 00:30:26.727 00:30:27.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:27.665 Nvme0n1 : 7.00 22901.29 89.46 0.00 0.00 0.00 0.00 0.00 00:30:27.665 [2024-11-20T11:40:10.781Z] =================================================================================================================== 00:30:27.665 [2024-11-20T11:40:10.781Z] Total : 22901.29 89.46 0.00 0.00 0.00 0.00 0.00 00:30:27.665 00:30:28.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:28.605 Nvme0n1 : 8.00 22943.75 89.62 0.00 0.00 0.00 0.00 0.00 00:30:28.605 [2024-11-20T11:40:11.721Z] =================================================================================================================== 00:30:28.606 [2024-11-20T11:40:11.722Z] Total : 22943.75 89.62 0.00 0.00 0.00 0.00 0.00 00:30:28.606 00:30:29.984 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:29.984 Nvme0n1 : 9.00 22966.44 89.71 0.00 0.00 0.00 0.00 0.00 00:30:29.984 [2024-11-20T11:40:13.100Z] =================================================================================================================== 00:30:29.984 [2024-11-20T11:40:13.100Z] Total : 22966.44 89.71 0.00 0.00 0.00 0.00 0.00 00:30:29.984 00:30:30.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:30.922 Nvme0n1 : 10.00 22993.90 89.82 0.00 0.00 0.00 0.00 0.00 00:30:30.922 [2024-11-20T11:40:14.038Z] =================================================================================================================== 00:30:30.922 [2024-11-20T11:40:14.038Z] Total : 22993.90 89.82 0.00 0.00 0.00 0.00 0.00 00:30:30.922 00:30:30.922 
00:30:30.922 Latency(us) 00:30:30.922 [2024-11-20T11:40:14.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:30.922 Nvme0n1 : 10.01 22992.53 89.81 0.00 0.00 5564.08 3177.07 25302.59 00:30:30.922 [2024-11-20T11:40:14.038Z] =================================================================================================================== 00:30:30.922 [2024-11-20T11:40:14.038Z] Total : 22992.53 89.81 0.00 0.00 5564.08 3177.07 25302.59 00:30:30.922 { 00:30:30.922 "results": [ 00:30:30.922 { 00:30:30.922 "job": "Nvme0n1", 00:30:30.922 "core_mask": "0x2", 00:30:30.922 "workload": "randwrite", 00:30:30.922 "status": "finished", 00:30:30.922 "queue_depth": 128, 00:30:30.922 "io_size": 4096, 00:30:30.922 "runtime": 10.006162, 00:30:30.922 "iops": 22992.532001780502, 00:30:30.922 "mibps": 89.81457813195509, 00:30:30.922 "io_failed": 0, 00:30:30.922 "io_timeout": 0, 00:30:30.922 "avg_latency_us": 5564.075280694225, 00:30:30.922 "min_latency_us": 3177.0713043478263, 00:30:30.922 "max_latency_us": 25302.594782608696 00:30:30.922 } 00:30:30.922 ], 00:30:30.922 "core_count": 1 00:30:30.922 } 00:30:30.922 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 639880 00:30:30.922 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 639880 ']' 00:30:30.922 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 639880 00:30:30.922 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:30.922 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:30.922 12:40:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 639880 00:30:30.922 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:30.922 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:30.922 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 639880' 00:30:30.922 killing process with pid 639880 00:30:30.922 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 639880 00:30:30.922 Received shutdown signal, test time was about 10.000000 seconds 00:30:30.922 00:30:30.922 Latency(us) 00:30:30.922 [2024-11-20T11:40:14.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.922 [2024-11-20T11:40:14.038Z] =================================================================================================================== 00:30:30.922 [2024-11-20T11:40:14.038Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:30.922 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 639880 00:30:30.922 12:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:31.182 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:31.442 12:40:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u affda5f8-877b-4120-9b22-492aa866f0fb 00:30:31.442 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:31.442 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:31.442 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:31.442 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 636800 00:30:31.442 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 636800 00:30:31.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 636800 Killed "${NVMF_APP[@]}" "$@" 00:30:31.442 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:31.442 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:31.442 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:31.442 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:31.442 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:31.442 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=641727 00:30:31.442 12:40:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 641727 00:30:31.442 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:31.442 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 641727 ']' 00:30:31.442 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.442 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:31.442 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:31.442 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:31.442 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:31.701 [2024-11-20 12:40:14.592816] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:31.701 [2024-11-20 12:40:14.593726] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:30:31.701 [2024-11-20 12:40:14.593760] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:31.701 [2024-11-20 12:40:14.674848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.701 [2024-11-20 12:40:14.715348] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:31.701 [2024-11-20 12:40:14.715387] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:31.701 [2024-11-20 12:40:14.715394] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:31.701 [2024-11-20 12:40:14.715401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:31.701 [2024-11-20 12:40:14.715405] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:31.701 [2024-11-20 12:40:14.715965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.701 [2024-11-20 12:40:14.782258] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:31.701 [2024-11-20 12:40:14.782477] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:31.701 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:31.701 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:31.701 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:31.701 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:31.701 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:31.961 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:31.961 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:31.961 [2024-11-20 12:40:15.021457] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:31.961 [2024-11-20 12:40:15.021652] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:31.961 [2024-11-20 12:40:15.021734] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:31.961 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:31.961 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 35f8e742-4c72-4833-9c37-561bceb3d5dd 00:30:31.961 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=35f8e742-4c72-4833-9c37-561bceb3d5dd 00:30:31.961 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:31.961 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:31.961 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:31.961 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:31.961 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:32.221 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 35f8e742-4c72-4833-9c37-561bceb3d5dd -t 2000 00:30:32.481 [ 00:30:32.481 { 00:30:32.481 "name": "35f8e742-4c72-4833-9c37-561bceb3d5dd", 00:30:32.481 "aliases": [ 00:30:32.481 "lvs/lvol" 00:30:32.481 ], 00:30:32.481 "product_name": "Logical Volume", 00:30:32.481 "block_size": 4096, 00:30:32.481 "num_blocks": 38912, 00:30:32.481 "uuid": "35f8e742-4c72-4833-9c37-561bceb3d5dd", 00:30:32.481 "assigned_rate_limits": { 00:30:32.481 "rw_ios_per_sec": 0, 00:30:32.481 "rw_mbytes_per_sec": 0, 00:30:32.481 "r_mbytes_per_sec": 0, 00:30:32.481 "w_mbytes_per_sec": 0 00:30:32.481 }, 00:30:32.481 "claimed": false, 00:30:32.481 "zoned": false, 00:30:32.481 "supported_io_types": { 00:30:32.481 "read": true, 00:30:32.481 "write": true, 00:30:32.481 "unmap": true, 00:30:32.481 "flush": false, 00:30:32.481 "reset": true, 00:30:32.481 "nvme_admin": false, 00:30:32.481 "nvme_io": false, 00:30:32.481 "nvme_io_md": false, 00:30:32.481 "write_zeroes": true, 
00:30:32.481 "zcopy": false, 00:30:32.481 "get_zone_info": false, 00:30:32.481 "zone_management": false, 00:30:32.481 "zone_append": false, 00:30:32.481 "compare": false, 00:30:32.481 "compare_and_write": false, 00:30:32.481 "abort": false, 00:30:32.481 "seek_hole": true, 00:30:32.481 "seek_data": true, 00:30:32.481 "copy": false, 00:30:32.481 "nvme_iov_md": false 00:30:32.481 }, 00:30:32.481 "driver_specific": { 00:30:32.481 "lvol": { 00:30:32.481 "lvol_store_uuid": "affda5f8-877b-4120-9b22-492aa866f0fb", 00:30:32.481 "base_bdev": "aio_bdev", 00:30:32.481 "thin_provision": false, 00:30:32.481 "num_allocated_clusters": 38, 00:30:32.481 "snapshot": false, 00:30:32.481 "clone": false, 00:30:32.481 "esnap_clone": false 00:30:32.481 } 00:30:32.481 } 00:30:32.481 } 00:30:32.481 ] 00:30:32.481 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:32.481 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u affda5f8-877b-4120-9b22-492aa866f0fb 00:30:32.481 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:32.740 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:32.740 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u affda5f8-877b-4120-9b22-492aa866f0fb 00:30:32.740 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:33.000 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:33.000 12:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:33.000 [2024-11-20 12:40:16.036424] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:33.000 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u affda5f8-877b-4120-9b22-492aa866f0fb 00:30:33.000 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:33.000 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u affda5f8-877b-4120-9b22-492aa866f0fb 00:30:33.000 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:33.000 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:33.000 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:33.000 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:33.000 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:33.000 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:33.000 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:33.000 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:33.000 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u affda5f8-877b-4120-9b22-492aa866f0fb 00:30:33.259 request: 00:30:33.259 { 00:30:33.259 "uuid": "affda5f8-877b-4120-9b22-492aa866f0fb", 00:30:33.259 "method": "bdev_lvol_get_lvstores", 00:30:33.259 "req_id": 1 00:30:33.259 } 00:30:33.259 Got JSON-RPC error response 00:30:33.259 response: 00:30:33.259 { 00:30:33.259 "code": -19, 00:30:33.259 "message": "No such device" 00:30:33.259 } 00:30:33.259 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:33.259 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:33.259 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:33.259 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:33.259 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:33.519 aio_bdev 00:30:33.519 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 35f8e742-4c72-4833-9c37-561bceb3d5dd 00:30:33.519 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=35f8e742-4c72-4833-9c37-561bceb3d5dd 00:30:33.519 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:33.519 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:33.519 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:33.519 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:33.519 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:33.778 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 35f8e742-4c72-4833-9c37-561bceb3d5dd -t 2000 00:30:33.778 [ 00:30:33.778 { 00:30:33.778 "name": "35f8e742-4c72-4833-9c37-561bceb3d5dd", 00:30:33.778 "aliases": [ 00:30:33.778 "lvs/lvol" 00:30:33.778 ], 00:30:33.778 "product_name": "Logical Volume", 00:30:33.778 "block_size": 4096, 00:30:33.778 "num_blocks": 38912, 00:30:33.778 "uuid": "35f8e742-4c72-4833-9c37-561bceb3d5dd", 00:30:33.778 "assigned_rate_limits": { 00:30:33.778 "rw_ios_per_sec": 0, 00:30:33.778 "rw_mbytes_per_sec": 0, 00:30:33.778 
"r_mbytes_per_sec": 0, 00:30:33.778 "w_mbytes_per_sec": 0 00:30:33.778 }, 00:30:33.778 "claimed": false, 00:30:33.778 "zoned": false, 00:30:33.778 "supported_io_types": { 00:30:33.778 "read": true, 00:30:33.778 "write": true, 00:30:33.778 "unmap": true, 00:30:33.778 "flush": false, 00:30:33.778 "reset": true, 00:30:33.778 "nvme_admin": false, 00:30:33.778 "nvme_io": false, 00:30:33.778 "nvme_io_md": false, 00:30:33.778 "write_zeroes": true, 00:30:33.778 "zcopy": false, 00:30:33.778 "get_zone_info": false, 00:30:33.778 "zone_management": false, 00:30:33.778 "zone_append": false, 00:30:33.778 "compare": false, 00:30:33.778 "compare_and_write": false, 00:30:33.778 "abort": false, 00:30:33.778 "seek_hole": true, 00:30:33.778 "seek_data": true, 00:30:33.778 "copy": false, 00:30:33.778 "nvme_iov_md": false 00:30:33.778 }, 00:30:33.778 "driver_specific": { 00:30:33.778 "lvol": { 00:30:33.778 "lvol_store_uuid": "affda5f8-877b-4120-9b22-492aa866f0fb", 00:30:33.778 "base_bdev": "aio_bdev", 00:30:33.778 "thin_provision": false, 00:30:33.778 "num_allocated_clusters": 38, 00:30:33.778 "snapshot": false, 00:30:33.778 "clone": false, 00:30:33.778 "esnap_clone": false 00:30:33.778 } 00:30:33.778 } 00:30:33.778 } 00:30:33.778 ] 00:30:33.778 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:33.778 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u affda5f8-877b-4120-9b22-492aa866f0fb 00:30:33.778 12:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:34.037 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:34.037 12:40:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u affda5f8-877b-4120-9b22-492aa866f0fb 00:30:34.037 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:34.296 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:34.296 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 35f8e742-4c72-4833-9c37-561bceb3d5dd 00:30:34.555 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u affda5f8-877b-4120-9b22-492aa866f0fb 00:30:34.555 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:34.814 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:34.814 00:30:34.814 real 0m16.938s 00:30:34.814 user 0m34.397s 00:30:34.814 sys 0m3.818s 00:30:34.814 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:34.814 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:34.814 ************************************ 00:30:34.814 END TEST lvs_grow_dirty 00:30:34.814 ************************************ 
00:30:34.814 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:34.814 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:34.814 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:34.814 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:34.814 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:34.814 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:34.814 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:34.814 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:34.815 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:34.815 nvmf_trace.0 00:30:34.815 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:34.815 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:34.815 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:34.815 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:30:34.815 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:34.815 12:40:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:34.815 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:34.815 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:34.815 rmmod nvme_tcp 00:30:35.075 rmmod nvme_fabrics 00:30:35.075 rmmod nvme_keyring 00:30:35.075 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:35.075 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:35.075 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:35.075 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 641727 ']' 00:30:35.075 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 641727 00:30:35.075 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 641727 ']' 00:30:35.075 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 641727 00:30:35.075 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:35.075 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:35.075 12:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 641727 00:30:35.075 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:35.075 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:35.075 12:40:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 641727' 00:30:35.075 killing process with pid 641727 00:30:35.075 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 641727 00:30:35.075 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 641727 00:30:35.334 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:35.334 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:35.334 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:35.334 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:35.334 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:35.334 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:35.334 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:35.334 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:35.334 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:35.334 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.334 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.334 12:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.253 12:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:37.253 00:30:37.253 real 0m41.909s 00:30:37.253 user 0m52.265s 00:30:37.253 sys 0m10.168s 00:30:37.253 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:37.253 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:37.253 ************************************ 00:30:37.253 END TEST nvmf_lvs_grow 00:30:37.253 ************************************ 00:30:37.253 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:37.253 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:37.253 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:37.253 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:37.253 ************************************ 00:30:37.253 START TEST nvmf_bdev_io_wait 00:30:37.253 ************************************ 00:30:37.253 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:37.513 * Looking for test storage... 
00:30:37.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:37.513 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:37.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.514 --rc genhtml_branch_coverage=1 00:30:37.514 --rc genhtml_function_coverage=1 00:30:37.514 --rc genhtml_legend=1 00:30:37.514 --rc geninfo_all_blocks=1 00:30:37.514 --rc geninfo_unexecuted_blocks=1 00:30:37.514 00:30:37.514 ' 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:37.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.514 --rc genhtml_branch_coverage=1 00:30:37.514 --rc genhtml_function_coverage=1 00:30:37.514 --rc genhtml_legend=1 00:30:37.514 --rc geninfo_all_blocks=1 00:30:37.514 --rc geninfo_unexecuted_blocks=1 00:30:37.514 00:30:37.514 ' 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:37.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.514 --rc genhtml_branch_coverage=1 00:30:37.514 --rc genhtml_function_coverage=1 00:30:37.514 --rc genhtml_legend=1 00:30:37.514 --rc geninfo_all_blocks=1 00:30:37.514 --rc geninfo_unexecuted_blocks=1 00:30:37.514 00:30:37.514 ' 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:37.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.514 --rc genhtml_branch_coverage=1 00:30:37.514 --rc genhtml_function_coverage=1 
00:30:37.514 --rc genhtml_legend=1 00:30:37.514 --rc geninfo_all_blocks=1 00:30:37.514 --rc geninfo_unexecuted_blocks=1 00:30:37.514 00:30:37.514 ' 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:37.514 12:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.514 12:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:37.514 12:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:37.514 12:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:37.514 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:44.086 12:40:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:44.086 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:44.086 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:44.086 Found net devices under 0000:86:00.0: cvl_0_0 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:44.086 Found net devices under 0000:86:00.1: cvl_0_1 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:44.086 12:40:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:44.086 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:44.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:44.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:30:44.087 00:30:44.087 --- 10.0.0.2 ping statistics --- 00:30:44.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.087 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:44.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:44.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:30:44.087 00:30:44.087 --- 10.0.0.1 ping statistics --- 00:30:44.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.087 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:44.087 12:40:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=645773 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 645773 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 645773 ']' 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.087 [2024-11-20 12:40:26.495667] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:44.087 [2024-11-20 12:40:26.496560] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:30:44.087 [2024-11-20 12:40:26.496590] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:44.087 [2024-11-20 12:40:26.563769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:44.087 [2024-11-20 12:40:26.608706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:44.087 [2024-11-20 12:40:26.608745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:44.087 [2024-11-20 12:40:26.608752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:44.087 [2024-11-20 12:40:26.608758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:44.087 [2024-11-20 12:40:26.608763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:44.087 [2024-11-20 12:40:26.611966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.087 [2024-11-20 12:40:26.611998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:44.087 [2024-11-20 12:40:26.612029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.087 [2024-11-20 12:40:26.612030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:44.087 [2024-11-20 12:40:26.612434] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.087 12:40:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.087 [2024-11-20 12:40:26.762108] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:44.087 [2024-11-20 12:40:26.762651] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:44.087 [2024-11-20 12:40:26.762678] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:44.087 [2024-11-20 12:40:26.762839] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.087 [2024-11-20 12:40:26.772866] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.087 Malloc0 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.087 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.087 12:40:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.088 [2024-11-20 12:40:26.836785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=645811 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=645813 00:30:44.088 12:40:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:44.088 { 00:30:44.088 "params": { 00:30:44.088 "name": "Nvme$subsystem", 00:30:44.088 "trtype": "$TEST_TRANSPORT", 00:30:44.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:44.088 "adrfam": "ipv4", 00:30:44.088 "trsvcid": "$NVMF_PORT", 00:30:44.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:44.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:44.088 "hdgst": ${hdgst:-false}, 00:30:44.088 "ddgst": ${ddgst:-false} 00:30:44.088 }, 00:30:44.088 "method": "bdev_nvme_attach_controller" 00:30:44.088 } 00:30:44.088 EOF 00:30:44.088 )") 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=645815 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:44.088 12:40:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:44.088 { 00:30:44.088 "params": { 00:30:44.088 "name": "Nvme$subsystem", 00:30:44.088 "trtype": "$TEST_TRANSPORT", 00:30:44.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:44.088 "adrfam": "ipv4", 00:30:44.088 "trsvcid": "$NVMF_PORT", 00:30:44.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:44.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:44.088 "hdgst": ${hdgst:-false}, 00:30:44.088 "ddgst": ${ddgst:-false} 00:30:44.088 }, 00:30:44.088 "method": "bdev_nvme_attach_controller" 00:30:44.088 } 00:30:44.088 EOF 00:30:44.088 )") 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=645818 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:44.088 { 00:30:44.088 "params": { 00:30:44.088 "name": "Nvme$subsystem", 00:30:44.088 "trtype": "$TEST_TRANSPORT", 00:30:44.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:44.088 "adrfam": "ipv4", 00:30:44.088 "trsvcid": "$NVMF_PORT", 00:30:44.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:44.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:44.088 "hdgst": ${hdgst:-false}, 00:30:44.088 "ddgst": ${ddgst:-false} 00:30:44.088 }, 00:30:44.088 "method": "bdev_nvme_attach_controller" 00:30:44.088 } 00:30:44.088 EOF 00:30:44.088 )") 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:44.088 { 00:30:44.088 "params": { 00:30:44.088 "name": "Nvme$subsystem", 00:30:44.088 "trtype": "$TEST_TRANSPORT", 00:30:44.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:44.088 "adrfam": "ipv4", 00:30:44.088 "trsvcid": "$NVMF_PORT", 00:30:44.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:44.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:44.088 "hdgst": ${hdgst:-false}, 00:30:44.088 "ddgst": ${ddgst:-false} 00:30:44.088 }, 00:30:44.088 "method": 
"bdev_nvme_attach_controller" 00:30:44.088 } 00:30:44.088 EOF 00:30:44.088 )") 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 645811 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:44.088 "params": { 00:30:44.088 "name": "Nvme1", 00:30:44.088 "trtype": "tcp", 00:30:44.088 "traddr": "10.0.0.2", 00:30:44.088 "adrfam": "ipv4", 00:30:44.088 "trsvcid": "4420", 00:30:44.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:44.088 "hdgst": false, 00:30:44.088 "ddgst": false 00:30:44.088 }, 00:30:44.088 "method": "bdev_nvme_attach_controller" 00:30:44.088 }' 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:44.088 "params": { 00:30:44.088 "name": "Nvme1", 00:30:44.088 "trtype": "tcp", 00:30:44.088 "traddr": "10.0.0.2", 00:30:44.088 "adrfam": "ipv4", 00:30:44.088 "trsvcid": "4420", 00:30:44.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:44.088 "hdgst": false, 00:30:44.088 "ddgst": false 00:30:44.088 }, 00:30:44.088 "method": "bdev_nvme_attach_controller" 00:30:44.088 }' 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:44.088 "params": { 00:30:44.088 "name": "Nvme1", 00:30:44.088 "trtype": "tcp", 00:30:44.088 "traddr": "10.0.0.2", 00:30:44.088 "adrfam": "ipv4", 00:30:44.088 "trsvcid": "4420", 00:30:44.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:44.088 "hdgst": false, 00:30:44.088 "ddgst": false 00:30:44.088 }, 00:30:44.088 "method": "bdev_nvme_attach_controller" 00:30:44.088 }' 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:44.088 12:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:44.088 "params": { 00:30:44.088 "name": "Nvme1", 00:30:44.088 "trtype": "tcp", 00:30:44.088 "traddr": "10.0.0.2", 00:30:44.088 "adrfam": "ipv4", 00:30:44.088 "trsvcid": "4420", 00:30:44.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:44.088 "hdgst": false, 00:30:44.088 "ddgst": false 00:30:44.088 }, 00:30:44.089 "method": "bdev_nvme_attach_controller" 00:30:44.089 }' 00:30:44.089 [2024-11-20 12:40:26.890123] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 
initialization... 00:30:44.089 [2024-11-20 12:40:26.890169] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:44.089 [2024-11-20 12:40:26.891032] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:30:44.089 [2024-11-20 12:40:26.891074] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:44.089 [2024-11-20 12:40:26.891084] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:30:44.089 [2024-11-20 12:40:26.891126] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:44.089 [2024-11-20 12:40:26.893124] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:30:44.089 [2024-11-20 12:40:26.893165] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:44.089 [2024-11-20 12:40:27.086810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.089 [2024-11-20 12:40:27.129993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:44.089 [2024-11-20 12:40:27.181585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.348 [2024-11-20 12:40:27.224897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:44.348 [2024-11-20 12:40:27.282022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.348 [2024-11-20 12:40:27.325585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.348 [2024-11-20 12:40:27.332014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:44.348 [2024-11-20 12:40:27.368480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:44.348 Running I/O for 1 seconds... 00:30:44.606 Running I/O for 1 seconds... 00:30:44.606 Running I/O for 1 seconds... 00:30:44.606 Running I/O for 1 seconds... 
00:30:45.543 8924.00 IOPS, 34.86 MiB/s 00:30:45.543 Latency(us) 00:30:45.543 [2024-11-20T11:40:28.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:45.543 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:45.543 Nvme1n1 : 1.02 8949.49 34.96 0.00 0.00 14241.32 3789.69 25074.64 00:30:45.543 [2024-11-20T11:40:28.659Z] =================================================================================================================== 00:30:45.543 [2024-11-20T11:40:28.659Z] Total : 8949.49 34.96 0.00 0.00 14241.32 3789.69 25074.64 00:30:45.543 11680.00 IOPS, 45.62 MiB/s 00:30:45.543 Latency(us) 00:30:45.543 [2024-11-20T11:40:28.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:45.543 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:45.543 Nvme1n1 : 1.01 11725.63 45.80 0.00 0.00 10876.76 4046.14 15158.76 00:30:45.543 [2024-11-20T11:40:28.659Z] =================================================================================================================== 00:30:45.543 [2024-11-20T11:40:28.659Z] Total : 11725.63 45.80 0.00 0.00 10876.76 4046.14 15158.76 00:30:45.543 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 645813 00:30:45.543 9300.00 IOPS, 36.33 MiB/s 00:30:45.543 Latency(us) 00:30:45.543 [2024-11-20T11:40:28.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:45.543 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:45.543 Nvme1n1 : 1.01 9434.99 36.86 0.00 0.00 13540.04 2763.91 31457.28 00:30:45.543 [2024-11-20T11:40:28.659Z] =================================================================================================================== 00:30:45.543 [2024-11-20T11:40:28.659Z] Total : 9434.99 36.86 0.00 0.00 13540.04 2763.91 31457.28 00:30:45.543 236944.00 IOPS, 925.56 MiB/s [2024-11-20T11:40:28.660Z] 12:40:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 645815 00:30:45.544 00:30:45.544 Latency(us) 00:30:45.544 [2024-11-20T11:40:28.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:45.544 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:45.544 Nvme1n1 : 1.00 236580.80 924.14 0.00 0.00 538.50 233.29 1538.67 00:30:45.544 [2024-11-20T11:40:28.660Z] =================================================================================================================== 00:30:45.544 [2024-11-20T11:40:28.660Z] Total : 236580.80 924.14 0.00 0.00 538.50 233.29 1538.67 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 645818 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
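The MiB/s column in the latency tables above is simply IOPS times the 4096-byte I/O size: for the write job, 8949.49 IOPS × 4096 B ≈ 34.96 MiB/s, and for the flush job, 236580.80 IOPS ≈ 924.14 MiB/s. A quick awk sketch of that conversion:

```shell
# Derive MiB/s from IOPS at the fixed 4096-byte I/O size used by the
# bdevperf jobs above. iops_to_mibs is an illustrative helper.
iops_to_mibs() {
    LC_ALL=C awk -v iops="$1" -v iosz="${2:-4096}" \
        'BEGIN { printf "%.2f\n", iops * iosz / (1024 * 1024) }'
}

iops_to_mibs 8949.49     # write job -> 34.96
iops_to_mibs 236580.80   # flush job -> 924.14
```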
00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:45.803 rmmod nvme_tcp 00:30:45.803 rmmod nvme_fabrics 00:30:45.803 rmmod nvme_keyring 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 645773 ']' 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 645773 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 645773 ']' 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 645773 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:45.803 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:45.804 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 645773 00:30:45.804 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:45.804 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:45.804 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 645773' 00:30:45.804 killing process with pid 645773 00:30:45.804 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 645773 00:30:45.804 12:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 645773 00:30:46.063 12:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:46.063 12:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:46.063 12:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:46.063 12:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:46.063 12:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:46.063 12:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:46.063 12:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:46.063 12:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:46.063 12:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:46.063 12:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.063 12:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:46.063 12:40:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:48.599 00:30:48.599 real 0m10.743s 00:30:48.599 user 0m15.215s 00:30:48.599 sys 0m6.354s 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:48.599 ************************************ 00:30:48.599 END TEST nvmf_bdev_io_wait 00:30:48.599 ************************************ 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:48.599 ************************************ 00:30:48.599 START TEST nvmf_queue_depth 00:30:48.599 ************************************ 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:48.599 * Looking for test storage... 
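The `killprocess 645773` trace above probes the pid with `kill -0`, looks up the command name with `ps --no-headers -o comm=`, and refuses to kill a bare `sudo` before sending the signal. A simplified reconstruction of that pattern (this mirrors the traced helper's shape, not its exact autotest_common.sh source):

```shell
# Reconstruction of the killprocess pattern traced above:
# probe liveness with kill -0, check the command name, then kill and reap.
killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 1              # not running
    process_name=$(ps --no-headers -o comm= -p "$pid")
    [ "$process_name" = "sudo" ] && return 1            # never kill bare sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                     # reap if it is our child
}
```

In the log the check resolves to `reactor_0`, so the guard passes and pid 645773 (the nvmf target started earlier) is killed.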
00:30:48.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:48.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.599 --rc genhtml_branch_coverage=1 00:30:48.599 --rc genhtml_function_coverage=1 00:30:48.599 --rc genhtml_legend=1 00:30:48.599 --rc geninfo_all_blocks=1 00:30:48.599 --rc geninfo_unexecuted_blocks=1 00:30:48.599 00:30:48.599 ' 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:48.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.599 --rc genhtml_branch_coverage=1 00:30:48.599 --rc genhtml_function_coverage=1 00:30:48.599 --rc genhtml_legend=1 00:30:48.599 --rc geninfo_all_blocks=1 00:30:48.599 --rc geninfo_unexecuted_blocks=1 00:30:48.599 00:30:48.599 ' 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:48.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.599 --rc genhtml_branch_coverage=1 00:30:48.599 --rc genhtml_function_coverage=1 00:30:48.599 --rc genhtml_legend=1 00:30:48.599 --rc geninfo_all_blocks=1 00:30:48.599 --rc geninfo_unexecuted_blocks=1 00:30:48.599 00:30:48.599 ' 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:48.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.599 --rc genhtml_branch_coverage=1 00:30:48.599 --rc genhtml_function_coverage=1 00:30:48.599 --rc genhtml_legend=1 00:30:48.599 --rc 
geninfo_all_blocks=1 00:30:48.599 --rc geninfo_unexecuted_blocks=1 00:30:48.599 00:30:48.599 ' 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:48.599 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.600 12:40:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:48.600 12:40:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:48.600 12:40:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:48.600 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:55.172 
12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:55.172 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:55.172 12:40:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:55.172 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:55.172 Found net devices under 0000:86:00.0: cvl_0_0 00:30:55.172 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:55.173 Found net devices under 0000:86:00.1: cvl_0_1 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:55.173 12:40:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:55.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:55.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:30:55.173 00:30:55.173 --- 10.0.0.2 ping statistics --- 00:30:55.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.173 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:55.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:55.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:30:55.173 00:30:55.173 --- 10.0.0.1 ping statistics --- 00:30:55.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.173 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:55.173 12:40:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=649732 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 649732 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 649732 ']' 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.173 [2024-11-20 12:40:37.369489] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:55.173 [2024-11-20 12:40:37.370415] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:30:55.173 [2024-11-20 12:40:37.370450] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:55.173 [2024-11-20 12:40:37.453276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.173 [2024-11-20 12:40:37.494022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:55.173 [2024-11-20 12:40:37.494057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:55.173 [2024-11-20 12:40:37.494064] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:55.173 [2024-11-20 12:40:37.494070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:55.173 [2024-11-20 12:40:37.494075] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:55.173 [2024-11-20 12:40:37.494624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.173 [2024-11-20 12:40:37.560299] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:55.173 [2024-11-20 12:40:37.560530] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.173 [2024-11-20 12:40:37.627354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.173 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.173 Malloc0 00:30:55.173 12:40:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.174 [2024-11-20 12:40:37.699408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.174 
12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=649818 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 649818 /var/tmp/bdevperf.sock 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 649818 ']' 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:55.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.174 [2024-11-20 12:40:37.751998] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:30:55.174 [2024-11-20 12:40:37.752039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid649818 ] 00:30:55.174 [2024-11-20 12:40:37.827519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.174 [2024-11-20 12:40:37.869971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.174 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:55.174 NVMe0n1 00:30:55.174 12:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.174 12:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:55.174 Running I/O for 10 seconds... 
00:30:57.491 11412.00 IOPS, 44.58 MiB/s [2024-11-20T11:40:41.545Z] 11782.00 IOPS, 46.02 MiB/s [2024-11-20T11:40:42.483Z] 11942.33 IOPS, 46.65 MiB/s [2024-11-20T11:40:43.420Z] 12033.00 IOPS, 47.00 MiB/s [2024-11-20T11:40:44.357Z] 12079.40 IOPS, 47.19 MiB/s [2024-11-20T11:40:45.736Z] 12110.17 IOPS, 47.31 MiB/s [2024-11-20T11:40:46.674Z] 12140.29 IOPS, 47.42 MiB/s [2024-11-20T11:40:47.611Z] 12159.38 IOPS, 47.50 MiB/s [2024-11-20T11:40:48.550Z] 12175.22 IOPS, 47.56 MiB/s [2024-11-20T11:40:48.550Z] 12185.70 IOPS, 47.60 MiB/s 00:31:05.434 Latency(us) 00:31:05.434 [2024-11-20T11:40:48.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.434 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:05.434 Verification LBA range: start 0x0 length 0x4000 00:31:05.434 NVMe0n1 : 10.05 12223.52 47.75 0.00 0.00 83510.30 15500.69 54252.41 00:31:05.434 [2024-11-20T11:40:48.550Z] =================================================================================================================== 00:31:05.434 [2024-11-20T11:40:48.550Z] Total : 12223.52 47.75 0.00 0.00 83510.30 15500.69 54252.41 00:31:05.434 { 00:31:05.434 "results": [ 00:31:05.434 { 00:31:05.434 "job": "NVMe0n1", 00:31:05.434 "core_mask": "0x1", 00:31:05.434 "workload": "verify", 00:31:05.434 "status": "finished", 00:31:05.434 "verify_range": { 00:31:05.434 "start": 0, 00:31:05.434 "length": 16384 00:31:05.434 }, 00:31:05.434 "queue_depth": 1024, 00:31:05.434 "io_size": 4096, 00:31:05.434 "runtime": 10.051606, 00:31:05.434 "iops": 12223.519306268074, 00:31:05.434 "mibps": 47.74812229010966, 00:31:05.434 "io_failed": 0, 00:31:05.434 "io_timeout": 0, 00:31:05.434 "avg_latency_us": 83510.29921541955, 00:31:05.434 "min_latency_us": 15500.688695652174, 00:31:05.434 "max_latency_us": 54252.41043478261 00:31:05.434 } 00:31:05.434 ], 00:31:05.434 "core_count": 1 00:31:05.434 } 00:31:05.434 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 649818 00:31:05.434 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 649818 ']' 00:31:05.434 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 649818 00:31:05.434 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:05.434 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:05.434 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 649818 00:31:05.434 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:05.434 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:05.434 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 649818' 00:31:05.434 killing process with pid 649818 00:31:05.435 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 649818 00:31:05.435 Received shutdown signal, test time was about 10.000000 seconds 00:31:05.435 00:31:05.435 Latency(us) 00:31:05.435 [2024-11-20T11:40:48.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.435 [2024-11-20T11:40:48.551Z] =================================================================================================================== 00:31:05.435 [2024-11-20T11:40:48.551Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:05.435 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 649818 00:31:05.695 12:40:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:05.695 rmmod nvme_tcp 00:31:05.695 rmmod nvme_fabrics 00:31:05.695 rmmod nvme_keyring 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 649732 ']' 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 649732 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 649732 ']' 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 649732 00:31:05.695 12:40:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 649732 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 649732' 00:31:05.695 killing process with pid 649732 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 649732 00:31:05.695 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 649732 00:31:05.954 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:05.954 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:05.954 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:05.954 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:05.954 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:05.954 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:05.954 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:31:05.954 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:05.954 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:05.954 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.954 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:05.954 12:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.860 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:07.860 00:31:07.860 real 0m19.789s 00:31:07.860 user 0m22.819s 00:31:07.860 sys 0m6.377s 00:31:07.860 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:07.860 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:07.860 ************************************ 00:31:07.860 END TEST nvmf_queue_depth 00:31:07.860 ************************************ 00:31:08.120 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:08.120 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:08.120 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:08.120 12:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:08.120 ************************************ 00:31:08.120 START 
TEST nvmf_target_multipath 00:31:08.120 ************************************ 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:08.120 * Looking for test storage... 00:31:08.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:08.120 12:40:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:08.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.120 --rc genhtml_branch_coverage=1 00:31:08.120 --rc genhtml_function_coverage=1 00:31:08.120 --rc genhtml_legend=1 00:31:08.120 --rc geninfo_all_blocks=1 00:31:08.120 --rc geninfo_unexecuted_blocks=1 00:31:08.120 00:31:08.120 ' 00:31:08.120 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:08.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.120 --rc genhtml_branch_coverage=1 00:31:08.120 --rc genhtml_function_coverage=1 00:31:08.120 --rc genhtml_legend=1 00:31:08.121 --rc geninfo_all_blocks=1 00:31:08.121 --rc geninfo_unexecuted_blocks=1 00:31:08.121 00:31:08.121 ' 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:08.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.121 --rc genhtml_branch_coverage=1 00:31:08.121 --rc genhtml_function_coverage=1 00:31:08.121 --rc genhtml_legend=1 00:31:08.121 --rc geninfo_all_blocks=1 00:31:08.121 --rc geninfo_unexecuted_blocks=1 00:31:08.121 00:31:08.121 ' 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:08.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.121 --rc genhtml_branch_coverage=1 00:31:08.121 --rc genhtml_function_coverage=1 00:31:08.121 --rc genhtml_legend=1 00:31:08.121 --rc geninfo_all_blocks=1 00:31:08.121 --rc geninfo_unexecuted_blocks=1 00:31:08.121 00:31:08.121 ' 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:08.121 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.380 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:08.380 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:08.380 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:08.380 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.380 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.380 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.380 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:08.380 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:08.380 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:08.380 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:08.380 12:40:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:08.380 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:08.380 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:08.380 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:08.380 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:08.380 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:08.381 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:08.381 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:08.381 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:08.381 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:08.381 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:08.381 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.381 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.381 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.381 12:40:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:08.381 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:08.381 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:08.381 12:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:14.955 12:40:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:14.955 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.955 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:14.956 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:14.956 Found net devices under 0000:86:00.0: cvl_0_0 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.956 12:40:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:14.956 Found net devices under 0000:86:00.1: cvl_0_1 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:14.956 12:40:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:14.956 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:14.956 12:40:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:14.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:14.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:31:14.956 00:31:14.956 --- 10.0.0.2 ping statistics --- 00:31:14.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.956 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:14.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:14.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:31:14.956 00:31:14.956 --- 10.0.0.1 ping statistics --- 00:31:14.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.956 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:14.956 only one NIC for nvmf test 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:14.956 12:40:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:14.956 rmmod nvme_tcp 00:31:14.956 rmmod nvme_fabrics 00:31:14.956 rmmod nvme_keyring 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:14.956 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:14.957 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:14.957 12:40:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:14.957 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:14.957 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:14.957 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:14.957 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.957 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.957 12:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.335 
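The `iptr` step traced above rewrites the firewall by piping `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`, dropping every rule the test tagged with an `SPDK_NVMF` comment. A minimal sketch of that filtering idea, run against a canned ruleset instead of the live firewall (the real pipeline needs root):

```shell
#!/usr/bin/env bash
# Sketch of the SPDK_NVMF rule filtering done by iptr in nvmf/common.sh:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
# Here a canned ruleset stands in for iptables-save output so the
# filtering can be shown without root.
rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -j DROP'

# Keep everything except the test-installed (SPDK_NVMF-tagged) rules.
kept=$(printf '%s\n' "$rules" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```

Tagging each inserted rule with a fixed comment is what makes this cleanup safe: only the test's own rules match the filter, so pre-existing firewall entries survive the restore.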
12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:16.335 00:31:16.335 real 0m8.324s 00:31:16.335 user 0m1.820s 00:31:16.335 sys 0m4.527s 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:16.335 ************************************ 00:31:16.335 END TEST nvmf_target_multipath 00:31:16.335 ************************************ 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:16.335 ************************************ 00:31:16.335 START TEST nvmf_zcopy 00:31:16.335 ************************************ 00:31:16.335 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:16.593 * Looking for test storage... 
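The teardown traced above (`nvmfcleanup`) disables `errexit` with `set +e`, retries `modprobe -v -r nvme-tcp` / `nvme-fabrics` inside a `for i in {1..20}` loop, then restores `set -e`. A sketch of that retry-unload pattern, with a hypothetical `try_unload` standing in for `modprobe -r` (real module removal needs root):

```shell
#!/usr/bin/env bash
# Sketch of the retry-unload loop from nvmfcleanup (nvmf/common.sh).
# try_unload is a stand-in for `modprobe -v -r nvme-tcp`; here it
# fails twice before succeeding, simulating a module still in use.
try_unload() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

attempts=0
set +e                      # tolerate transient "module in use" failures
for i in {1..20}; do
  if try_unload; then
    break
  fi
  sleep 0                   # the real script waits between retries
done
set -e                      # restore strict error handling
echo "unloaded after $attempts attempts"
```

Bracketing only the flaky section with `set +e` / `set -e` keeps the rest of the cleanup strict while letting the unload be retried until in-flight NVMe/TCP connections drain.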
00:31:16.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:16.593 12:40:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:16.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.593 --rc genhtml_branch_coverage=1 00:31:16.593 --rc genhtml_function_coverage=1 00:31:16.593 --rc genhtml_legend=1 00:31:16.593 --rc geninfo_all_blocks=1 00:31:16.593 --rc geninfo_unexecuted_blocks=1 00:31:16.593 00:31:16.593 ' 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:16.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.593 --rc genhtml_branch_coverage=1 00:31:16.593 --rc genhtml_function_coverage=1 00:31:16.593 --rc genhtml_legend=1 00:31:16.593 --rc geninfo_all_blocks=1 00:31:16.593 --rc geninfo_unexecuted_blocks=1 00:31:16.593 00:31:16.593 ' 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:16.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.593 --rc genhtml_branch_coverage=1 00:31:16.593 --rc genhtml_function_coverage=1 00:31:16.593 --rc genhtml_legend=1 00:31:16.593 --rc geninfo_all_blocks=1 00:31:16.593 --rc geninfo_unexecuted_blocks=1 00:31:16.593 00:31:16.593 ' 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:16.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.593 --rc genhtml_branch_coverage=1 00:31:16.593 --rc genhtml_function_coverage=1 00:31:16.593 --rc genhtml_legend=1 00:31:16.593 --rc geninfo_all_blocks=1 00:31:16.593 --rc geninfo_unexecuted_blocks=1 00:31:16.593 00:31:16.593 ' 00:31:16.593 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
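The `lt 1.15 2` / `cmp_versions` trace above splits each version string on `IFS=.-:` into an array and compares the components numerically, field by field. A simplified sketch of that comparison (numeric fields only; the real `scripts/common.sh` also handles longer version strings and other operators):

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version compare behind `lt 1.15 2`
# in scripts/common.sh: split on dots/dashes/colons, then compare
# numerically, treating missing components as 0.
version_lt() {
  local IFS=.-:              # split on ., -, and :
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v len=${#ver1[@]}
  if (( ${#ver2[@]} > len )); then len=${#ver2[@]}; fi
  for (( v = 0; v < len; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}
    if (( a < b )); then return 0; fi
    if (( a > b )); then return 1; fi
  done
  return 1                   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Comparing fields as integers is what makes `1.15 < 2` come out true here, where a plain string comparison would get it wrong (`"1.15" > "2"` lexically is false, but `"1.9" < "1.15"` lexically would be too).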
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:16.594 12:40:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:16.594 12:40:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:16.594 12:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.169 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:23.169 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:23.169 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:23.169 
12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:23.169 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:23.170 12:41:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:23.170 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:23.170 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:23.170 Found net devices under 0000:86:00.0: cvl_0_0 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:23.170 Found net devices under 0000:86:00.1: cvl_0_1 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:23.170 12:41:05 
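The `nvmf_tcp_init` sequence traced above moves the target NIC (`cvl_0_0` in this log) into a fresh network namespace and addresses the two ends as 10.0.0.2 (target) and 10.0.0.1 (initiator). A dry-run sketch of those steps, using the names from the log; the real commands need root and the physical NICs, so this version only echoes them:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target/initiator split from nvmf_tcp_init
# (nvmf/common.sh). Interface and namespace names are taken from the
# log above; swap run() for direct execution (as root) to apply.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0            # gets 10.0.0.2 inside the namespace
INITIATOR_IF=cvl_0_1         # stays in the root namespace as 10.0.0.1

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP traffic to the target's listener port (4420).
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
```

Isolating the target NIC in its own namespace is what lets the subsequent `ping -c 1 10.0.0.2` and `ip netns exec ... ping -c 1 10.0.0.1` checks exercise a real two-endpoint TCP path on a single machine.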
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:23.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:23.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:31:23.170 00:31:23.170 --- 10.0.0.2 ping statistics --- 00:31:23.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.170 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:23.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:23.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:31:23.170 00:31:23.170 --- 10.0.0.1 ping statistics --- 00:31:23.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.170 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:23.170 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=658462 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 658462 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 658462 ']' 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.171 [2024-11-20 12:41:05.614987] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:23.171 [2024-11-20 12:41:05.615975] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:31:23.171 [2024-11-20 12:41:05.616015] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:23.171 [2024-11-20 12:41:05.697822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.171 [2024-11-20 12:41:05.738603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:23.171 [2024-11-20 12:41:05.738638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:23.171 [2024-11-20 12:41:05.738645] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:23.171 [2024-11-20 12:41:05.738652] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:23.171 [2024-11-20 12:41:05.738657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:23.171 [2024-11-20 12:41:05.739175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.171 [2024-11-20 12:41:05.806889] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:23.171 [2024-11-20 12:41:05.807121] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.171 [2024-11-20 12:41:05.883785] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.171 
12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.171 [2024-11-20 12:41:05.908008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.171 malloc0 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:23.171 { 00:31:23.171 "params": { 00:31:23.171 "name": "Nvme$subsystem", 00:31:23.171 "trtype": "$TEST_TRANSPORT", 00:31:23.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.171 "adrfam": "ipv4", 00:31:23.171 "trsvcid": "$NVMF_PORT", 00:31:23.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.171 "hdgst": ${hdgst:-false}, 00:31:23.171 "ddgst": ${ddgst:-false} 00:31:23.171 }, 00:31:23.171 "method": "bdev_nvme_attach_controller" 00:31:23.171 } 00:31:23.171 EOF 00:31:23.171 )") 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:23.171 12:41:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:23.171 12:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:23.171 "params": { 00:31:23.171 "name": "Nvme1", 00:31:23.171 "trtype": "tcp", 00:31:23.171 "traddr": "10.0.0.2", 00:31:23.171 "adrfam": "ipv4", 00:31:23.171 "trsvcid": "4420", 00:31:23.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:23.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:23.171 "hdgst": false, 00:31:23.171 "ddgst": false 00:31:23.171 }, 00:31:23.171 "method": "bdev_nvme_attach_controller" 00:31:23.171 }' 00:31:23.171 [2024-11-20 12:41:05.998190] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:31:23.171 [2024-11-20 12:41:05.998231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658486 ] 00:31:23.171 [2024-11-20 12:41:06.073453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.171 [2024-11-20 12:41:06.118784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.485 Running I/O for 10 seconds... 
00:31:25.462 8257.00 IOPS, 64.51 MiB/s [2024-11-20T11:41:09.513Z] 8353.50 IOPS, 65.26 MiB/s [2024-11-20T11:41:10.448Z] 8380.33 IOPS, 65.47 MiB/s [2024-11-20T11:41:11.826Z] 8367.25 IOPS, 65.37 MiB/s [2024-11-20T11:41:12.764Z] 8390.80 IOPS, 65.55 MiB/s [2024-11-20T11:41:13.702Z] 8410.17 IOPS, 65.70 MiB/s [2024-11-20T11:41:14.640Z] 8420.14 IOPS, 65.78 MiB/s [2024-11-20T11:41:15.576Z] 8427.00 IOPS, 65.84 MiB/s [2024-11-20T11:41:16.513Z] 8429.89 IOPS, 65.86 MiB/s [2024-11-20T11:41:16.513Z] 8443.60 IOPS, 65.97 MiB/s 00:31:33.397 Latency(us) 00:31:33.397 [2024-11-20T11:41:16.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:33.397 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:31:33.397 Verification LBA range: start 0x0 length 0x1000 00:31:33.397 Nvme1n1 : 10.01 8443.26 65.96 0.00 0.00 15117.07 2151.29 21883.33 00:31:33.397 [2024-11-20T11:41:16.513Z] =================================================================================================================== 00:31:33.397 [2024-11-20T11:41:16.513Z] Total : 8443.26 65.96 0.00 0.00 15117.07 2151.29 21883.33 00:31:33.657 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=660297 00:31:33.657 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:31:33.657 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:33.657 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:31:33.657 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:31:33.657 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:33.657 12:41:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:33.657 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:33.657 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:33.657 { 00:31:33.657 "params": { 00:31:33.657 "name": "Nvme$subsystem", 00:31:33.657 "trtype": "$TEST_TRANSPORT", 00:31:33.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.657 "adrfam": "ipv4", 00:31:33.657 "trsvcid": "$NVMF_PORT", 00:31:33.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.657 "hdgst": ${hdgst:-false}, 00:31:33.657 "ddgst": ${ddgst:-false} 00:31:33.657 }, 00:31:33.657 "method": "bdev_nvme_attach_controller" 00:31:33.657 } 00:31:33.657 EOF 00:31:33.657 )") 00:31:33.657 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:33.657 [2024-11-20 12:41:16.595519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.657 [2024-11-20 12:41:16.595554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.657 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:31:33.657 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:33.657 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:33.657 "params": { 00:31:33.657 "name": "Nvme1", 00:31:33.657 "trtype": "tcp", 00:31:33.657 "traddr": "10.0.0.2", 00:31:33.657 "adrfam": "ipv4", 00:31:33.657 "trsvcid": "4420", 00:31:33.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:33.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:33.657 "hdgst": false, 00:31:33.657 "ddgst": false 00:31:33.657 }, 00:31:33.657 "method": "bdev_nvme_attach_controller" 00:31:33.657 }' 00:31:33.657 [2024-11-20 12:41:16.607482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.657 [2024-11-20 12:41:16.607496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.657 [2024-11-20 12:41:16.619477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.657 [2024-11-20 12:41:16.619488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.657 [2024-11-20 12:41:16.631479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.657 [2024-11-20 12:41:16.631489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.657 [2024-11-20 12:41:16.636097] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:31:33.657 [2024-11-20 12:41:16.636139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660297 ] 00:31:33.657 [2024-11-20 12:41:16.643477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.657 [2024-11-20 12:41:16.643488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.657 [2024-11-20 12:41:16.655475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.657 [2024-11-20 12:41:16.655485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.657 [2024-11-20 12:41:16.667477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.657 [2024-11-20 12:41:16.667487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.657 [2024-11-20 12:41:16.679476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.657 [2024-11-20 12:41:16.679486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.657 [2024-11-20 12:41:16.691478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.657 [2024-11-20 12:41:16.691490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.657 [2024-11-20 12:41:16.703474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.657 [2024-11-20 12:41:16.703484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.657 [2024-11-20 12:41:16.710605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.657 [2024-11-20 12:41:16.715475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:31:33.657 [2024-11-20 12:41:16.715485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.657 [2024-11-20 12:41:16.727478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.657 [2024-11-20 12:41:16.727494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.657 [2024-11-20 12:41:16.739476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.657 [2024-11-20 12:41:16.739486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.657 [2024-11-20 12:41:16.751490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.657 [2024-11-20 12:41:16.751510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.657 [2024-11-20 12:41:16.753359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.658 [2024-11-20 12:41:16.763481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.658 [2024-11-20 12:41:16.763495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:16.775484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.917 [2024-11-20 12:41:16.775505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:16.787483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.917 [2024-11-20 12:41:16.787499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:16.799478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.917 [2024-11-20 12:41:16.799491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:16.811479] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.917 [2024-11-20 12:41:16.811491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:16.823476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.917 [2024-11-20 12:41:16.823486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:16.835488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.917 [2024-11-20 12:41:16.835507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:16.847490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.917 [2024-11-20 12:41:16.847510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:16.859483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.917 [2024-11-20 12:41:16.859499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:16.871481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.917 [2024-11-20 12:41:16.871496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:16.883494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.917 [2024-11-20 12:41:16.883509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:16.895478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.917 [2024-11-20 12:41:16.895488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:16.907475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:33.917 [2024-11-20 12:41:16.907484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:16.919478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.917 [2024-11-20 12:41:16.919490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:16.931476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.917 [2024-11-20 12:41:16.931489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:16.943474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.917 [2024-11-20 12:41:16.943484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:16.955474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.917 [2024-11-20 12:41:16.955484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:16.967477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.917 [2024-11-20 12:41:16.967490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:16.979476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.917 [2024-11-20 12:41:16.979491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:16.991477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.917 [2024-11-20 12:41:16.991488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:33.917 [2024-11-20 12:41:17.003474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:33.917 
[2024-11-20 12:41:17.003485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.177 [2024-11-20 12:41:17.055220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.177 [2024-11-20 12:41:17.055240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.177 [2024-11-20 12:41:17.063478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.177 [2024-11-20 12:41:17.063491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.177 Running I/O for 5 seconds... 00:31:34.177 [2024-11-20 12:41:17.078051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.177 [2024-11-20 12:41:17.078071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.177 [2024-11-20 12:41:17.093413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.177 [2024-11-20 12:41:17.093434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.177 [2024-11-20 12:41:17.108268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.177 [2024-11-20 12:41:17.108288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.177 [2024-11-20 12:41:17.123473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.177 [2024-11-20 12:41:17.123496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.177 [2024-11-20 12:41:17.135323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.177 [2024-11-20 12:41:17.135343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.177 [2024-11-20 12:41:17.149618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.177 [2024-11-20 
12:41:17.149638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.177 [2024-11-20 12:41:17.164791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.177 [2024-11-20 12:41:17.164811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.177 [2024-11-20 12:41:17.179792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.177 [2024-11-20 12:41:17.179811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.177 [2024-11-20 12:41:17.195932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.177 [2024-11-20 12:41:17.195956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.177 [2024-11-20 12:41:17.208480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.177 [2024-11-20 12:41:17.208498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.177 [2024-11-20 12:41:17.221059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.177 [2024-11-20 12:41:17.221079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.177 [2024-11-20 12:41:17.236332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.177 [2024-11-20 12:41:17.236352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.177 [2024-11-20 12:41:17.251635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.177 [2024-11-20 12:41:17.251655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.177 [2024-11-20 12:41:17.265127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.177 [2024-11-20 12:41:17.265146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:31:34.177 [2024-11-20 12:41:17.279977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:34.177 [2024-11-20 12:41:17.279996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:34.436
[the same error pair — subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats continuously from 12:41:17.295799 through 12:41:19.724422]
00:31:35.215 16340.00 IOPS, 127.66 MiB/s [2024-11-20T11:41:18.331Z]
00:31:35.994 16437.50 IOPS, 128.42 MiB/s [2024-11-20T11:41:19.110Z]
[2024-11-20 12:41:19.724422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.776
[2024-11-20 12:41:19.724440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.776 [2024-11-20 12:41:19.739600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.776 [2024-11-20 12:41:19.739619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.776 [2024-11-20 12:41:19.751799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.776 [2024-11-20 12:41:19.751817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.776 [2024-11-20 12:41:19.765407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.776 [2024-11-20 12:41:19.765425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.776 [2024-11-20 12:41:19.780629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.776 [2024-11-20 12:41:19.780648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.776 [2024-11-20 12:41:19.795459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.776 [2024-11-20 12:41:19.795479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.776 [2024-11-20 12:41:19.808420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.776 [2024-11-20 12:41:19.808440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.776 [2024-11-20 12:41:19.819701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.776 [2024-11-20 12:41:19.819720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.776 [2024-11-20 12:41:19.833299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.776 [2024-11-20 12:41:19.833317] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.776 [2024-11-20 12:41:19.848398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.776 [2024-11-20 12:41:19.848416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.776 [2024-11-20 12:41:19.863453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.776 [2024-11-20 12:41:19.863472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.776 [2024-11-20 12:41:19.874897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.776 [2024-11-20 12:41:19.874916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.776 [2024-11-20 12:41:19.889225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.776 [2024-11-20 12:41:19.889245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.035 [2024-11-20 12:41:19.904588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.035 [2024-11-20 12:41:19.904607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.035 [2024-11-20 12:41:19.919398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.035 [2024-11-20 12:41:19.919417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.035 [2024-11-20 12:41:19.933727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.035 [2024-11-20 12:41:19.933746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.035 [2024-11-20 12:41:19.948718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.035 [2024-11-20 12:41:19.948737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:37.035 [2024-11-20 12:41:19.963642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.035 [2024-11-20 12:41:19.963667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.035 [2024-11-20 12:41:19.977819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.035 [2024-11-20 12:41:19.977838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.035 [2024-11-20 12:41:19.992834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.035 [2024-11-20 12:41:19.992853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.035 [2024-11-20 12:41:20.007604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.035 [2024-11-20 12:41:20.007624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.035 [2024-11-20 12:41:20.020918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.035 [2024-11-20 12:41:20.020938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.035 [2024-11-20 12:41:20.036265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.035 [2024-11-20 12:41:20.036285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.035 [2024-11-20 12:41:20.053733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.035 [2024-11-20 12:41:20.053754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.035 [2024-11-20 12:41:20.069253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.035 [2024-11-20 12:41:20.069273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.035 16395.00 IOPS, 128.09 MiB/s 
[2024-11-20T11:41:20.151Z] [2024-11-20 12:41:20.084564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.035 [2024-11-20 12:41:20.084583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.035 [2024-11-20 12:41:20.099589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.035 [2024-11-20 12:41:20.099610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.035 [2024-11-20 12:41:20.112590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.035 [2024-11-20 12:41:20.112609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.035 [2024-11-20 12:41:20.127753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.035 [2024-11-20 12:41:20.127773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.035 [2024-11-20 12:41:20.139303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.035 [2024-11-20 12:41:20.139322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.295 [2024-11-20 12:41:20.153940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.295 [2024-11-20 12:41:20.153964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.295 [2024-11-20 12:41:20.169228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.295 [2024-11-20 12:41:20.169247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.295 [2024-11-20 12:41:20.184315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.295 [2024-11-20 12:41:20.184335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.295 [2024-11-20 12:41:20.199122] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.295 [2024-11-20 12:41:20.199142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.295 [2024-11-20 12:41:20.210840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.295 [2024-11-20 12:41:20.210860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.295 [2024-11-20 12:41:20.225378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.295 [2024-11-20 12:41:20.225398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.295 [2024-11-20 12:41:20.240459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.295 [2024-11-20 12:41:20.240479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.295 [2024-11-20 12:41:20.255714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.295 [2024-11-20 12:41:20.255734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.295 [2024-11-20 12:41:20.266325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.295 [2024-11-20 12:41:20.266345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.295 [2024-11-20 12:41:20.281258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.295 [2024-11-20 12:41:20.281278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.295 [2024-11-20 12:41:20.296420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.295 [2024-11-20 12:41:20.296440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.295 [2024-11-20 12:41:20.311411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:37.295 [2024-11-20 12:41:20.311431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.295 [2024-11-20 12:41:20.324484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.295 [2024-11-20 12:41:20.324504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.295 [2024-11-20 12:41:20.340020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.295 [2024-11-20 12:41:20.340040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.295 [2024-11-20 12:41:20.355796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.295 [2024-11-20 12:41:20.355818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.295 [2024-11-20 12:41:20.371151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.295 [2024-11-20 12:41:20.371171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.295 [2024-11-20 12:41:20.384642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.295 [2024-11-20 12:41:20.384662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.295 [2024-11-20 12:41:20.399958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.295 [2024-11-20 12:41:20.399977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.555 [2024-11-20 12:41:20.416287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.555 [2024-11-20 12:41:20.416307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.555 [2024-11-20 12:41:20.431434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.555 
[2024-11-20 12:41:20.431454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.555 [2024-11-20 12:41:20.442292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.555 [2024-11-20 12:41:20.442312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.555 [2024-11-20 12:41:20.457688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.555 [2024-11-20 12:41:20.457707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.555 [2024-11-20 12:41:20.472597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.555 [2024-11-20 12:41:20.472616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.555 [2024-11-20 12:41:20.487829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.555 [2024-11-20 12:41:20.487847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.555 [2024-11-20 12:41:20.500207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.555 [2024-11-20 12:41:20.500227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.555 [2024-11-20 12:41:20.513158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.555 [2024-11-20 12:41:20.513178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.555 [2024-11-20 12:41:20.528232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.555 [2024-11-20 12:41:20.528252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.555 [2024-11-20 12:41:20.540407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.555 [2024-11-20 12:41:20.540426] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.555 [2024-11-20 12:41:20.555789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.555 [2024-11-20 12:41:20.555809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.555 [2024-11-20 12:41:20.571049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.555 [2024-11-20 12:41:20.571068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.555 [2024-11-20 12:41:20.585887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.555 [2024-11-20 12:41:20.585907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.555 [2024-11-20 12:41:20.600865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.555 [2024-11-20 12:41:20.600886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.555 [2024-11-20 12:41:20.615866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.555 [2024-11-20 12:41:20.615885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.555 [2024-11-20 12:41:20.631490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.555 [2024-11-20 12:41:20.631509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.555 [2024-11-20 12:41:20.645538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.555 [2024-11-20 12:41:20.645558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.555 [2024-11-20 12:41:20.660937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.555 [2024-11-20 12:41:20.660964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:37.814 [2024-11-20 12:41:20.676035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.814 [2024-11-20 12:41:20.676054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.814 [2024-11-20 12:41:20.691239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.814 [2024-11-20 12:41:20.691259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.814 [2024-11-20 12:41:20.704467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.814 [2024-11-20 12:41:20.704489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.814 [2024-11-20 12:41:20.719963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.814 [2024-11-20 12:41:20.719983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.814 [2024-11-20 12:41:20.731253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.814 [2024-11-20 12:41:20.731273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.814 [2024-11-20 12:41:20.745702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.814 [2024-11-20 12:41:20.745721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.814 [2024-11-20 12:41:20.760763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.814 [2024-11-20 12:41:20.760783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.814 [2024-11-20 12:41:20.775438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.814 [2024-11-20 12:41:20.775458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.814 [2024-11-20 12:41:20.786600] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.814 [2024-11-20 12:41:20.786620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.814 [2024-11-20 12:41:20.801887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.814 [2024-11-20 12:41:20.801907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.814 [2024-11-20 12:41:20.817135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.814 [2024-11-20 12:41:20.817156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.814 [2024-11-20 12:41:20.832475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.814 [2024-11-20 12:41:20.832494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.815 [2024-11-20 12:41:20.848011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.815 [2024-11-20 12:41:20.848030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.815 [2024-11-20 12:41:20.863351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.815 [2024-11-20 12:41:20.863370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.815 [2024-11-20 12:41:20.877230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.815 [2024-11-20 12:41:20.877249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.815 [2024-11-20 12:41:20.892161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.815 [2024-11-20 12:41:20.892180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.815 [2024-11-20 12:41:20.903699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:37.815 [2024-11-20 12:41:20.903721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.815 [2024-11-20 12:41:20.917762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.815 [2024-11-20 12:41:20.917781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.074 [2024-11-20 12:41:20.933010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.074 [2024-11-20 12:41:20.933029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.074 [2024-11-20 12:41:20.948140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.074 [2024-11-20 12:41:20.948158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.074 [2024-11-20 12:41:20.960811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.074 [2024-11-20 12:41:20.960829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.074 [2024-11-20 12:41:20.976148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.074 [2024-11-20 12:41:20.976167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.074 [2024-11-20 12:41:20.991372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.074 [2024-11-20 12:41:20.991392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.074 [2024-11-20 12:41:21.002931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.074 [2024-11-20 12:41:21.002955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.074 [2024-11-20 12:41:21.017218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.074 
[2024-11-20 12:41:21.017237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.074 [2024-11-20 12:41:21.032268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.074 [2024-11-20 12:41:21.032286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.074 [2024-11-20 12:41:21.047311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.074 [2024-11-20 12:41:21.047329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.074 [2024-11-20 12:41:21.060756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.074 [2024-11-20 12:41:21.060776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.074 [2024-11-20 12:41:21.072239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.074 [2024-11-20 12:41:21.072259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.074 16384.75 IOPS, 128.01 MiB/s [2024-11-20T11:41:21.190Z] [2024-11-20 12:41:21.085314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.074 [2024-11-20 12:41:21.085334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.074 [2024-11-20 12:41:21.100078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.074 [2024-11-20 12:41:21.100096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.074 [2024-11-20 12:41:21.110867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.074 [2024-11-20 12:41:21.110886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.074 [2024-11-20 12:41:21.125189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.074 
[2024-11-20 12:41:21.125208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.074 [2024-11-20 12:41:21.140076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.074 [2024-11-20 12:41:21.140095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.074 [2024-11-20 12:41:21.155620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.074 [2024-11-20 12:41:21.155640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.074 [2024-11-20 12:41:21.167417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.074 [2024-11-20 12:41:21.167436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.074 [2024-11-20 12:41:21.181741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.074 [2024-11-20 12:41:21.181760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.333 [2024-11-20 12:41:21.196752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.333 [2024-11-20 12:41:21.196771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.333 [2024-11-20 12:41:21.211560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.333 [2024-11-20 12:41:21.211579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.333 [2024-11-20 12:41:21.225390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.333 [2024-11-20 12:41:21.225410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.333 [2024-11-20 12:41:21.240386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.333 [2024-11-20 12:41:21.240406] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.333 [2024-11-20 12:41:21.255554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.333 [2024-11-20 12:41:21.255573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.333 [2024-11-20 12:41:21.268336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.333 [2024-11-20 12:41:21.268355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.333 [2024-11-20 12:41:21.283944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.333 [2024-11-20 12:41:21.283976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.333 [2024-11-20 12:41:21.299405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.333 [2024-11-20 12:41:21.299426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.333 [2024-11-20 12:41:21.313548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.333 [2024-11-20 12:41:21.313575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.333 [2024-11-20 12:41:21.328644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.333 [2024-11-20 12:41:21.328664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.333 [2024-11-20 12:41:21.343208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.333 [2024-11-20 12:41:21.343228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.333 [2024-11-20 12:41:21.356162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.333 [2024-11-20 12:41:21.356182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:38.333 [2024-11-20 12:41:21.369042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.333 [2024-11-20 12:41:21.369061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.333 [2024-11-20 12:41:21.384121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.333 [2024-11-20 12:41:21.384140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.333 [2024-11-20 12:41:21.399088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.333 [2024-11-20 12:41:21.399110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.333 [2024-11-20 12:41:21.413795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.333 [2024-11-20 12:41:21.413814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.333 [2024-11-20 12:41:21.428678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.333 [2024-11-20 12:41:21.428698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.333 [2024-11-20 12:41:21.443865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.333 [2024-11-20 12:41:21.443884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.593 [2024-11-20 12:41:21.455709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.593 [2024-11-20 12:41:21.455728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.593 [2024-11-20 12:41:21.469396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:38.593 [2024-11-20 12:41:21.469415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:38.593 [2024-11-20 12:41:21.484855] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.114 [2024-11-20 12:41:22.075821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.114 16385.20 IOPS, 128.01 MiB/s [2024-11-20T11:41:22.230Z] [2024-11-20 12:41:22.088104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.114 [2024-11-20 12:41:22.088124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.114 00:31:39.114 Latency(us) 00:31:39.114 [2024-11-20T11:41:22.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:39.114 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:31:39.114 Nvme1n1 : 5.01 16386.74 128.02 0.00 0.00 7803.20 2236.77 13962.02 00:31:39.114 [2024-11-20T11:41:22.230Z] =================================================================================================================== 00:31:39.114 [2024-11-20T11:41:22.230Z] Total : 16386.74 128.02 0.00 0.00 7803.20 2236.77 13962.02 00:31:39.114 [2024-11-20 12:41:22.099486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.114 [2024-11-20 12:41:22.099503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.114 [2024-11-20 12:41:22.111489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.114 [2024-11-20 12:41:22.111503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.114 [2024-11-20 12:41:22.123491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.114 [2024-11-20 12:41:22.123510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.114 [2024-11-20 12:41:22.135483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.114 [2024-11-20 12:41:22.135497] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:31:39.374 [2024-11-20 12:41:22.243473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:39.374 [2024-11-20 12:41:22.243484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:39.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (660297) - No such process 00:31:39.374 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 660297 00:31:39.374 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:39.374 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.374 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:39.374 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.374 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:39.374 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.374 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:39.374 delay0 00:31:39.374 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.374 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:39.374 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.374 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:31:39.374 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.374 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:39.374 [2024-11-20 12:41:22.388238] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:45.941 Initializing NVMe Controllers 00:31:45.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:45.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:45.941 Initialization complete. Launching workers. 00:31:45.941 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 266, failed: 17838 00:31:45.941 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 18003, failed to submit 101 00:31:45.941 success 17910, unsuccessful 93, failed 0 00:31:45.941 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:45.941 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:45.941 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:45.941 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:45.941 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:45.941 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:45.941 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:31:45.941 12:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:45.941 rmmod nvme_tcp 00:31:45.941 rmmod nvme_fabrics 00:31:45.941 rmmod nvme_keyring 00:31:45.941 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:45.941 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:45.941 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:45.941 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 658462 ']' 00:31:45.942 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 658462 00:31:45.942 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 658462 ']' 00:31:45.942 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 658462 00:31:45.942 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:45.942 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:45.942 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 658462 00:31:46.203 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:46.203 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:46.203 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 658462' 00:31:46.203 killing process with pid 658462 00:31:46.203 12:41:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 658462 00:31:46.203 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 658462 00:31:46.203 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:46.203 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:46.203 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:46.203 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:46.203 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:46.203 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:46.203 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:46.203 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:46.203 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:46.203 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.203 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:46.203 12:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.740 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:48.741 00:31:48.741 real 0m31.880s 00:31:48.741 user 0m41.322s 00:31:48.741 sys 0m12.556s 00:31:48.741 12:41:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:48.741 ************************************ 00:31:48.741 END TEST nvmf_zcopy 00:31:48.741 ************************************ 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:48.741 ************************************ 00:31:48.741 START TEST nvmf_nmic 00:31:48.741 ************************************ 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:48.741 * Looking for test storage... 
00:31:48.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:48.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.741 --rc genhtml_branch_coverage=1 00:31:48.741 --rc genhtml_function_coverage=1 00:31:48.741 --rc genhtml_legend=1 00:31:48.741 --rc geninfo_all_blocks=1 00:31:48.741 --rc geninfo_unexecuted_blocks=1 00:31:48.741 00:31:48.741 ' 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:48.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.741 --rc genhtml_branch_coverage=1 00:31:48.741 --rc genhtml_function_coverage=1 00:31:48.741 --rc genhtml_legend=1 00:31:48.741 --rc geninfo_all_blocks=1 00:31:48.741 --rc geninfo_unexecuted_blocks=1 00:31:48.741 00:31:48.741 ' 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:48.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.741 --rc genhtml_branch_coverage=1 00:31:48.741 --rc genhtml_function_coverage=1 00:31:48.741 --rc genhtml_legend=1 00:31:48.741 --rc geninfo_all_blocks=1 00:31:48.741 --rc geninfo_unexecuted_blocks=1 00:31:48.741 00:31:48.741 ' 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:48.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.741 --rc genhtml_branch_coverage=1 00:31:48.741 --rc genhtml_function_coverage=1 00:31:48.741 --rc genhtml_legend=1 00:31:48.741 --rc geninfo_all_blocks=1 00:31:48.741 --rc geninfo_unexecuted_blocks=1 00:31:48.741 00:31:48.741 ' 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.741 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:48.742 12:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.316 12:41:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:55.316 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:55.316 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:55.316 Found net devices under 0000:86:00.0: cvl_0_0 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.316 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:55.317 Found net devices under 0000:86:00.1: cvl_0_1 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:55.317 12:41:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:55.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:55.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:31:55.317 00:31:55.317 --- 10.0.0.2 ping statistics --- 00:31:55.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.317 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:55.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:31:55.317 00:31:55.317 --- 10.0.0.1 ping statistics --- 00:31:55.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.317 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=665663 
00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 665663 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 665663 ']' 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:55.317 12:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:55.317 [2024-11-20 12:41:37.508818] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:55.317 [2024-11-20 12:41:37.509833] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:31:55.317 [2024-11-20 12:41:37.509872] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.317 [2024-11-20 12:41:37.591129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:55.317 [2024-11-20 12:41:37.633144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:55.317 [2024-11-20 12:41:37.633186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.317 [2024-11-20 12:41:37.633193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.317 [2024-11-20 12:41:37.633201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.317 [2024-11-20 12:41:37.633206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:55.317 [2024-11-20 12:41:37.634657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.317 [2024-11-20 12:41:37.634769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:55.317 [2024-11-20 12:41:37.634874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.317 [2024-11-20 12:41:37.634875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:55.317 [2024-11-20 12:41:37.702665] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:55.317 [2024-11-20 12:41:37.703774] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:55.317 [2024-11-20 12:41:37.703911] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:55.317 [2024-11-20 12:41:37.704213] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:55.317 [2024-11-20 12:41:37.704249] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:55.317 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:55.317 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:55.317 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:55.317 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:55.317 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:55.317 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:55.317 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:55.317 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.317 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:55.317 [2024-11-20 12:41:38.403642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.577 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.577 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:55.577 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.577 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:55.577 Malloc0 00:31:55.577 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.577 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:55.577 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.577 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:55.577 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.577 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:55.577 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.577 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:55.577 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:55.578 [2024-11-20 12:41:38.487902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:55.578 12:41:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:55.578 test case1: single bdev can't be used in multiple subsystems 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:55.578 [2024-11-20 12:41:38.511338] 
bdev.c:8203:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:55.578 [2024-11-20 12:41:38.511360] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:55.578 [2024-11-20 12:41:38.511367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:55.578 request: 00:31:55.578 { 00:31:55.578 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:55.578 "namespace": { 00:31:55.578 "bdev_name": "Malloc0", 00:31:55.578 "no_auto_visible": false 00:31:55.578 }, 00:31:55.578 "method": "nvmf_subsystem_add_ns", 00:31:55.578 "req_id": 1 00:31:55.578 } 00:31:55.578 Got JSON-RPC error response 00:31:55.578 response: 00:31:55.578 { 00:31:55.578 "code": -32602, 00:31:55.578 "message": "Invalid parameters" 00:31:55.578 } 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:55.578 Adding namespace failed - expected result. 
00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:55.578 test case2: host connect to nvmf target in multiple paths 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:55.578 [2024-11-20 12:41:38.523436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.578 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:55.837 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:56.096 12:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:56.096 12:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:56.096 12:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:56.096 12:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:56.096 12:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:57.998 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:57.998 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:57.998 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:57.998 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:57.998 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:57.998 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:57.998 12:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:57.998 [global] 00:31:57.998 thread=1 00:31:57.998 invalidate=1 00:31:57.998 rw=write 00:31:57.998 time_based=1 00:31:57.998 runtime=1 00:31:57.998 ioengine=libaio 00:31:57.998 direct=1 00:31:57.998 bs=4096 00:31:57.998 iodepth=1 00:31:57.998 norandommap=0 00:31:57.998 numjobs=1 00:31:57.998 00:31:57.998 verify_dump=1 00:31:57.998 verify_backlog=512 00:31:57.998 verify_state_save=0 00:31:57.998 do_verify=1 00:31:57.998 verify=crc32c-intel 00:31:57.998 [job0] 00:31:57.998 filename=/dev/nvme0n1 00:31:57.998 Could not set queue depth (nvme0n1) 00:31:58.257 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:58.257 fio-3.35 00:31:58.257 Starting 1 thread 00:31:59.636 00:31:59.636 job0: (groupid=0, jobs=1): err= 0: pid=666504: Wed Nov 20 
12:41:42 2024 00:31:59.636 read: IOPS=23, BW=94.1KiB/s (96.4kB/s)(96.0KiB/1020msec) 00:31:59.636 slat (nsec): min=7953, max=25961, avg=21675.12, stdev=4150.51 00:31:59.636 clat (usec): min=200, max=41031, avg=39263.57, stdev=8320.67 00:31:59.636 lat (usec): min=209, max=41055, avg=39285.25, stdev=8323.31 00:31:59.636 clat percentiles (usec): 00:31:59.636 | 1.00th=[ 200], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:59.636 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:59.636 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:59.636 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:59.636 | 99.99th=[41157] 00:31:59.636 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:31:59.636 slat (nsec): min=8297, max=41082, avg=10919.92, stdev=2269.16 00:31:59.636 clat (usec): min=107, max=298, avg=135.54, stdev= 8.63 00:31:59.636 lat (usec): min=131, max=339, avg=146.46, stdev= 9.98 00:31:59.636 clat percentiles (usec): 00:31:59.636 | 1.00th=[ 125], 5.00th=[ 129], 10.00th=[ 131], 20.00th=[ 133], 00:31:59.636 | 30.00th=[ 135], 40.00th=[ 135], 50.00th=[ 135], 60.00th=[ 137], 00:31:59.636 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 141], 95.00th=[ 145], 00:31:59.636 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 297], 99.95th=[ 297], 00:31:59.636 | 99.99th=[ 297] 00:31:59.636 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:59.636 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:59.636 lat (usec) : 250=95.52%, 500=0.19% 00:31:59.636 lat (msec) : 50=4.29% 00:31:59.636 cpu : usr=0.29%, sys=0.98%, ctx=536, majf=0, minf=1 00:31:59.636 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:59.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.636 issued rwts: 
total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.636 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:59.636 00:31:59.636 Run status group 0 (all jobs): 00:31:59.636 READ: bw=94.1KiB/s (96.4kB/s), 94.1KiB/s-94.1KiB/s (96.4kB/s-96.4kB/s), io=96.0KiB (98.3kB), run=1020-1020msec 00:31:59.636 WRITE: bw=2008KiB/s (2056kB/s), 2008KiB/s-2008KiB/s (2056kB/s-2056kB/s), io=2048KiB (2097kB), run=1020-1020msec 00:31:59.636 00:31:59.636 Disk stats (read/write): 00:31:59.637 nvme0n1: ios=70/512, merge=0/0, ticks=837/67, in_queue=904, util=90.98% 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:59.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:59.637 12:41:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:59.637 rmmod nvme_tcp 00:31:59.637 rmmod nvme_fabrics 00:31:59.637 rmmod nvme_keyring 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 665663 ']' 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 665663 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 665663 ']' 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 665663 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:59.637 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 665663 00:31:59.895 
12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:59.895 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:59.895 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 665663' 00:31:59.895 killing process with pid 665663 00:31:59.895 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 665663 00:31:59.895 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 665663 00:31:59.895 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:59.895 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:59.895 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:59.895 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:59.895 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:59.895 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:59.896 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:59.896 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:59.896 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:59.896 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.896 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.896 12:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.428 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:02.428 00:32:02.428 real 0m13.658s 00:32:02.428 user 0m24.178s 00:32:02.428 sys 0m6.046s 00:32:02.428 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:02.428 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:02.428 ************************************ 00:32:02.428 END TEST nvmf_nmic 00:32:02.428 ************************************ 00:32:02.428 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:02.428 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:02.428 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:02.428 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:02.428 ************************************ 00:32:02.428 START TEST nvmf_fio_target 00:32:02.428 ************************************ 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:02.429 * Looking for test storage... 
00:32:02.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:02.429 
12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:02.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.429 --rc genhtml_branch_coverage=1 00:32:02.429 --rc genhtml_function_coverage=1 00:32:02.429 --rc genhtml_legend=1 00:32:02.429 --rc geninfo_all_blocks=1 00:32:02.429 --rc geninfo_unexecuted_blocks=1 00:32:02.429 00:32:02.429 ' 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:02.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.429 --rc genhtml_branch_coverage=1 00:32:02.429 --rc genhtml_function_coverage=1 00:32:02.429 --rc genhtml_legend=1 00:32:02.429 --rc geninfo_all_blocks=1 00:32:02.429 --rc geninfo_unexecuted_blocks=1 00:32:02.429 00:32:02.429 ' 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:02.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.429 --rc genhtml_branch_coverage=1 00:32:02.429 --rc genhtml_function_coverage=1 00:32:02.429 --rc genhtml_legend=1 00:32:02.429 --rc geninfo_all_blocks=1 00:32:02.429 --rc geninfo_unexecuted_blocks=1 00:32:02.429 00:32:02.429 ' 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:02.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:02.429 --rc genhtml_branch_coverage=1 00:32:02.429 --rc genhtml_function_coverage=1 00:32:02.429 --rc genhtml_legend=1 00:32:02.429 --rc geninfo_all_blocks=1 
00:32:02.429 --rc geninfo_unexecuted_blocks=1 00:32:02.429 00:32:02.429 ' 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:02.429 
12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:02.429 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.430 12:41:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:02.430 
12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:02.430 12:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:02.430 12:41:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:09.005 12:41:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:09.005 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:09.005 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:09.005 
12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.005 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:09.006 Found net 
devices under 0000:86:00.0: cvl_0_0 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:09.006 Found net devices under 0000:86:00.1: cvl_0_1 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:09.006 12:41:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:32:09.006 12:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:09.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:09.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:32:09.006 00:32:09.006 --- 10.0.0.2 ping statistics --- 00:32:09.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.006 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:09.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:09.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:32:09.006 00:32:09.006 --- 10.0.0.1 ping statistics --- 00:32:09.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.006 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:09.006 12:41:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=670133 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 670133 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 670133 ']' 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:09.006 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:09.006 [2024-11-20 12:41:51.311628] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:09.006 [2024-11-20 12:41:51.312642] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:32:09.006 [2024-11-20 12:41:51.312678] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.006 [2024-11-20 12:41:51.391625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:09.006 [2024-11-20 12:41:51.432896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.006 [2024-11-20 12:41:51.432937] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.007 [2024-11-20 12:41:51.432944] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:09.007 [2024-11-20 12:41:51.432956] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:09.007 [2024-11-20 12:41:51.432961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:09.007 [2024-11-20 12:41:51.434382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.007 [2024-11-20 12:41:51.434494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:09.007 [2024-11-20 12:41:51.434600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.007 [2024-11-20 12:41:51.434601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:09.007 [2024-11-20 12:41:51.502577] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:09.007 [2024-11-20 12:41:51.503147] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:09.007 [2024-11-20 12:41:51.503533] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:09.007 [2024-11-20 12:41:51.503901] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:09.007 [2024-11-20 12:41:51.503938] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:09.007 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:09.007 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:32:09.007 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:09.007 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:09.007 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:09.007 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.007 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:09.007 [2024-11-20 12:41:51.751356] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:09.007 12:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:09.007 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:09.007 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:32:09.266 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:09.266 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:09.525 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:09.525 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:09.785 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:09.785 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:09.785 12:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:10.044 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:10.044 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:10.303 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:10.303 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:10.562 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:32:10.562 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:10.562 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:10.821 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:10.821 12:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:11.080 12:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:11.080 12:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:11.339 12:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:11.339 [2024-11-20 12:41:54.439210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:11.598 12:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:11.598 12:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:11.856 12:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:12.115 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:12.115 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:12.115 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:12.115 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:12.115 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:12.115 12:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:14.647 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:14.647 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:14.647 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:14.647 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:14.647 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:14.647 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:32:14.647 12:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:14.648 [global] 00:32:14.648 thread=1 00:32:14.648 invalidate=1 00:32:14.648 rw=write 00:32:14.648 time_based=1 00:32:14.648 runtime=1 00:32:14.648 ioengine=libaio 00:32:14.648 direct=1 00:32:14.648 bs=4096 00:32:14.648 iodepth=1 00:32:14.648 norandommap=0 00:32:14.648 numjobs=1 00:32:14.648 00:32:14.648 verify_dump=1 00:32:14.648 verify_backlog=512 00:32:14.648 verify_state_save=0 00:32:14.648 do_verify=1 00:32:14.648 verify=crc32c-intel 00:32:14.648 [job0] 00:32:14.648 filename=/dev/nvme0n1 00:32:14.648 [job1] 00:32:14.648 filename=/dev/nvme0n2 00:32:14.648 [job2] 00:32:14.648 filename=/dev/nvme0n3 00:32:14.648 [job3] 00:32:14.648 filename=/dev/nvme0n4 00:32:14.648 Could not set queue depth (nvme0n1) 00:32:14.648 Could not set queue depth (nvme0n2) 00:32:14.648 Could not set queue depth (nvme0n3) 00:32:14.648 Could not set queue depth (nvme0n4) 00:32:14.648 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:14.648 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:14.648 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:14.648 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:14.648 fio-3.35 00:32:14.648 Starting 4 threads 00:32:16.026 00:32:16.026 job0: (groupid=0, jobs=1): err= 0: pid=671385: Wed Nov 20 12:41:58 2024 00:32:16.026 read: IOPS=337, BW=1351KiB/s (1383kB/s)(1352KiB/1001msec) 00:32:16.026 slat (nsec): min=9010, max=26265, avg=11091.11, stdev=1878.25 00:32:16.026 clat (usec): min=288, max=41067, avg=2634.24, stdev=9375.91 00:32:16.026 lat (usec): min=298, 
max=41079, avg=2645.33, stdev=9376.70 00:32:16.026 clat percentiles (usec): 00:32:16.026 | 1.00th=[ 293], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 310], 00:32:16.026 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 330], 00:32:16.026 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[41157], 00:32:16.026 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:16.026 | 99.99th=[41157] 00:32:16.026 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:32:16.026 slat (nsec): min=8422, max=52270, avg=13846.78, stdev=2568.89 00:32:16.026 clat (usec): min=142, max=398, avg=187.52, stdev=24.54 00:32:16.026 lat (usec): min=155, max=414, avg=201.36, stdev=25.35 00:32:16.026 clat percentiles (usec): 00:32:16.026 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:32:16.026 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:32:16.026 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 219], 00:32:16.026 | 99.00th=[ 251], 99.50th=[ 379], 99.90th=[ 400], 99.95th=[ 400], 00:32:16.026 | 99.99th=[ 400] 00:32:16.026 bw ( KiB/s): min= 4096, max= 4096, per=13.32%, avg=4096.00, stdev= 0.00, samples=1 00:32:16.026 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:16.026 lat (usec) : 250=59.53%, 500=38.00%, 750=0.12% 00:32:16.026 lat (msec) : 10=0.12%, 50=2.24% 00:32:16.026 cpu : usr=0.60%, sys=1.80%, ctx=852, majf=0, minf=1 00:32:16.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:16.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.026 issued rwts: total=338,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:16.026 job1: (groupid=0, jobs=1): err= 0: pid=671386: Wed Nov 20 12:41:58 2024 00:32:16.026 read: IOPS=2077, BW=8312KiB/s 
(8511kB/s)(8320KiB/1001msec) 00:32:16.026 slat (nsec): min=7402, max=38487, avg=8782.02, stdev=1333.65 00:32:16.026 clat (usec): min=174, max=463, avg=241.85, stdev=32.07 00:32:16.026 lat (usec): min=182, max=471, avg=250.63, stdev=32.02 00:32:16.026 clat percentiles (usec): 00:32:16.026 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 217], 00:32:16.026 | 30.00th=[ 225], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:32:16.026 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 277], 95.00th=[ 318], 00:32:16.026 | 99.00th=[ 355], 99.50th=[ 367], 99.90th=[ 408], 99.95th=[ 412], 00:32:16.026 | 99.99th=[ 465] 00:32:16.026 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:16.026 slat (nsec): min=9710, max=57249, avg=12575.87, stdev=2063.44 00:32:16.026 clat (usec): min=130, max=266, avg=168.68, stdev=17.20 00:32:16.026 lat (usec): min=144, max=314, avg=181.26, stdev=17.28 00:32:16.026 clat percentiles (usec): 00:32:16.026 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:32:16.026 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:32:16.026 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 200], 00:32:16.026 | 99.00th=[ 229], 99.50th=[ 243], 99.90th=[ 265], 99.95th=[ 265], 00:32:16.026 | 99.99th=[ 269] 00:32:16.026 bw ( KiB/s): min=10848, max=10848, per=35.28%, avg=10848.00, stdev= 0.00, samples=1 00:32:16.026 iops : min= 2712, max= 2712, avg=2712.00, stdev= 0.00, samples=1 00:32:16.026 lat (usec) : 250=90.00%, 500=10.00% 00:32:16.026 cpu : usr=4.50%, sys=7.20%, ctx=4641, majf=0, minf=1 00:32:16.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:16.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.026 issued rwts: total=2080,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.026 latency : target=0, window=0, percentile=100.00%, depth=1 
00:32:16.026 job2: (groupid=0, jobs=1): err= 0: pid=671387: Wed Nov 20 12:41:58 2024 00:32:16.026 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:32:16.026 slat (nsec): min=7668, max=40956, avg=9217.47, stdev=1538.73 00:32:16.026 clat (usec): min=204, max=40977, avg=262.40, stdev=900.24 00:32:16.026 lat (usec): min=214, max=40986, avg=271.62, stdev=900.25 00:32:16.026 clat percentiles (usec): 00:32:16.026 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 235], 00:32:16.026 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 245], 00:32:16.026 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 260], 00:32:16.026 | 99.00th=[ 277], 99.50th=[ 306], 99.90th=[ 453], 99.95th=[ 486], 00:32:16.026 | 99.99th=[41157] 00:32:16.026 write: IOPS=2239, BW=8959KiB/s (9174kB/s)(8968KiB/1001msec); 0 zone resets 00:32:16.026 slat (nsec): min=11170, max=42333, avg=12893.28, stdev=1948.44 00:32:16.026 clat (usec): min=142, max=285, avg=179.05, stdev=20.07 00:32:16.026 lat (usec): min=154, max=301, avg=191.95, stdev=20.55 00:32:16.026 clat percentiles (usec): 00:32:16.026 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:32:16.026 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 180], 00:32:16.026 | 70.00th=[ 184], 80.00th=[ 196], 90.00th=[ 210], 95.00th=[ 217], 00:32:16.026 | 99.00th=[ 241], 99.50th=[ 273], 99.90th=[ 281], 99.95th=[ 285], 00:32:16.026 | 99.99th=[ 285] 00:32:16.026 bw ( KiB/s): min=10008, max=10008, per=32.55%, avg=10008.00, stdev= 0.00, samples=1 00:32:16.026 iops : min= 2502, max= 2502, avg=2502.00, stdev= 0.00, samples=1 00:32:16.026 lat (usec) : 250=91.05%, 500=8.93% 00:32:16.026 lat (msec) : 50=0.02% 00:32:16.026 cpu : usr=3.70%, sys=7.30%, ctx=4292, majf=0, minf=1 00:32:16.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:16.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:32:16.026 issued rwts: total=2048,2242,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:16.026 job3: (groupid=0, jobs=1): err= 0: pid=671388: Wed Nov 20 12:41:58 2024 00:32:16.026 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:32:16.026 slat (nsec): min=6852, max=18307, avg=7737.80, stdev=816.10 00:32:16.026 clat (usec): min=177, max=474, avg=264.41, stdev=37.18 00:32:16.026 lat (usec): min=185, max=482, avg=272.15, stdev=37.12 00:32:16.026 clat percentiles (usec): 00:32:16.026 | 1.00th=[ 188], 5.00th=[ 215], 10.00th=[ 225], 20.00th=[ 239], 00:32:16.026 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 265], 00:32:16.026 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 318], 00:32:16.026 | 99.00th=[ 412], 99.50th=[ 420], 99.90th=[ 433], 99.95th=[ 469], 00:32:16.026 | 99.99th=[ 474] 00:32:16.026 write: IOPS=2377, BW=9510KiB/s (9739kB/s)(9520KiB/1001msec); 0 zone resets 00:32:16.026 slat (nsec): min=9867, max=37527, avg=11221.22, stdev=1497.33 00:32:16.026 clat (usec): min=128, max=401, avg=170.37, stdev=16.06 00:32:16.026 lat (usec): min=140, max=438, avg=181.59, stdev=16.42 00:32:16.026 clat percentiles (usec): 00:32:16.026 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:32:16.026 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:32:16.026 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 198], 00:32:16.026 | 99.00th=[ 215], 99.50th=[ 223], 99.90th=[ 258], 99.95th=[ 269], 00:32:16.026 | 99.99th=[ 400] 00:32:16.026 bw ( KiB/s): min= 8624, max= 8624, per=28.05%, avg=8624.00, stdev= 0.00, samples=1 00:32:16.026 iops : min= 2156, max= 2156, avg=2156.00, stdev= 0.00, samples=1 00:32:16.026 lat (usec) : 250=75.32%, 500=24.68% 00:32:16.026 cpu : usr=1.90%, sys=4.70%, ctx=4429, majf=0, minf=1 00:32:16.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:16.026 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.026 issued rwts: total=2048,2380,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:16.026 00:32:16.026 Run status group 0 (all jobs): 00:32:16.027 READ: bw=25.4MiB/s (26.7MB/s), 1351KiB/s-8312KiB/s (1383kB/s-8511kB/s), io=25.4MiB (26.7MB), run=1001-1001msec 00:32:16.027 WRITE: bw=30.0MiB/s (31.5MB/s), 2046KiB/s-9.99MiB/s (2095kB/s-10.5MB/s), io=30.1MiB (31.5MB), run=1001-1001msec 00:32:16.027 00:32:16.027 Disk stats (read/write): 00:32:16.027 nvme0n1: ios=75/512, merge=0/0, ticks=1751/91, in_queue=1842, util=98.00% 00:32:16.027 nvme0n2: ios=1903/2048, merge=0/0, ticks=1415/315, in_queue=1730, util=98.27% 00:32:16.027 nvme0n3: ios=1805/2048, merge=0/0, ticks=1402/349, in_queue=1751, util=98.33% 00:32:16.027 nvme0n4: ios=1748/2048, merge=0/0, ticks=1437/335, in_queue=1772, util=98.21% 00:32:16.027 12:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:16.027 [global] 00:32:16.027 thread=1 00:32:16.027 invalidate=1 00:32:16.027 rw=randwrite 00:32:16.027 time_based=1 00:32:16.027 runtime=1 00:32:16.027 ioengine=libaio 00:32:16.027 direct=1 00:32:16.027 bs=4096 00:32:16.027 iodepth=1 00:32:16.027 norandommap=0 00:32:16.027 numjobs=1 00:32:16.027 00:32:16.027 verify_dump=1 00:32:16.027 verify_backlog=512 00:32:16.027 verify_state_save=0 00:32:16.027 do_verify=1 00:32:16.027 verify=crc32c-intel 00:32:16.027 [job0] 00:32:16.027 filename=/dev/nvme0n1 00:32:16.027 [job1] 00:32:16.027 filename=/dev/nvme0n2 00:32:16.027 [job2] 00:32:16.027 filename=/dev/nvme0n3 00:32:16.027 [job3] 00:32:16.027 filename=/dev/nvme0n4 00:32:16.027 Could not set queue depth (nvme0n1) 00:32:16.027 Could not set queue depth (nvme0n2) 
00:32:16.027 Could not set queue depth (nvme0n3) 00:32:16.027 Could not set queue depth (nvme0n4) 00:32:16.027 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:16.027 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:16.027 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:16.027 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:16.027 fio-3.35 00:32:16.027 Starting 4 threads 00:32:17.405 00:32:17.405 job0: (groupid=0, jobs=1): err= 0: pid=671759: Wed Nov 20 12:42:00 2024 00:32:17.405 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:32:17.405 slat (nsec): min=8330, max=53980, avg=10848.61, stdev=3810.63 00:32:17.405 clat (usec): min=191, max=41963, avg=1687.48, stdev=7512.35 00:32:17.405 lat (usec): min=200, max=41984, avg=1698.33, stdev=7514.50 00:32:17.405 clat percentiles (usec): 00:32:17.405 | 1.00th=[ 206], 5.00th=[ 219], 10.00th=[ 231], 20.00th=[ 237], 00:32:17.405 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 255], 60.00th=[ 265], 00:32:17.405 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 277], 95.00th=[ 306], 00:32:17.405 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:32:17.405 | 99.99th=[42206] 00:32:17.405 write: IOPS=633, BW=2533KiB/s (2594kB/s)(2536KiB/1001msec); 0 zone resets 00:32:17.405 slat (nsec): min=9814, max=53841, avg=11563.10, stdev=3202.58 00:32:17.405 clat (usec): min=144, max=357, avg=188.44, stdev=28.20 00:32:17.405 lat (usec): min=154, max=390, avg=200.00, stdev=29.82 00:32:17.405 clat percentiles (usec): 00:32:17.405 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:32:17.405 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 184], 00:32:17.405 | 70.00th=[ 190], 80.00th=[ 202], 90.00th=[ 241], 95.00th=[ 249], 00:32:17.405 | 
99.00th=[ 273], 99.50th=[ 302], 99.90th=[ 359], 99.95th=[ 359], 00:32:17.405 | 99.99th=[ 359] 00:32:17.405 bw ( KiB/s): min= 4096, max= 4096, per=22.19%, avg=4096.00, stdev= 0.00, samples=1 00:32:17.405 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:17.405 lat (usec) : 250=73.65%, 500=24.69% 00:32:17.405 lat (msec) : 2=0.09%, 50=1.57% 00:32:17.405 cpu : usr=1.60%, sys=0.90%, ctx=1146, majf=0, minf=1 00:32:17.405 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.405 issued rwts: total=512,634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.405 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:17.405 job1: (groupid=0, jobs=1): err= 0: pid=671761: Wed Nov 20 12:42:00 2024 00:32:17.405 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:32:17.405 slat (nsec): min=9697, max=35133, avg=21806.00, stdev=6000.44 00:32:17.405 clat (usec): min=40758, max=41091, avg=40959.75, stdev=84.10 00:32:17.405 lat (usec): min=40783, max=41114, avg=40981.55, stdev=83.43 00:32:17.405 clat percentiles (usec): 00:32:17.405 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:17.405 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:17.405 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:17.405 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:17.405 | 99.99th=[41157] 00:32:17.405 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:32:17.405 slat (nsec): min=10273, max=45128, avg=12972.98, stdev=2985.20 00:32:17.405 clat (usec): min=143, max=282, avg=186.16, stdev=19.71 00:32:17.405 lat (usec): min=162, max=327, avg=199.14, stdev=20.29 00:32:17.405 clat percentiles (usec): 00:32:17.405 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 
167], 20.00th=[ 172], 00:32:17.405 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 188], 00:32:17.405 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 210], 95.00th=[ 235], 00:32:17.405 | 99.00th=[ 247], 99.50th=[ 251], 99.90th=[ 281], 99.95th=[ 281], 00:32:17.405 | 99.99th=[ 281] 00:32:17.405 bw ( KiB/s): min= 4096, max= 4096, per=22.19%, avg=4096.00, stdev= 0.00, samples=1 00:32:17.405 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:17.405 lat (usec) : 250=95.13%, 500=0.75% 00:32:17.405 lat (msec) : 50=4.12% 00:32:17.405 cpu : usr=1.00%, sys=0.40%, ctx=534, majf=0, minf=2 00:32:17.405 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.405 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.405 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:17.405 job2: (groupid=0, jobs=1): err= 0: pid=671762: Wed Nov 20 12:42:00 2024 00:32:17.405 read: IOPS=2391, BW=9566KiB/s (9796kB/s)(9576KiB/1001msec) 00:32:17.405 slat (nsec): min=6542, max=28542, avg=7668.93, stdev=1380.95 00:32:17.405 clat (usec): min=176, max=557, avg=233.45, stdev=24.05 00:32:17.405 lat (usec): min=184, max=565, avg=241.12, stdev=24.32 00:32:17.405 clat percentiles (usec): 00:32:17.405 | 1.00th=[ 192], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 212], 00:32:17.405 | 30.00th=[ 217], 40.00th=[ 227], 50.00th=[ 241], 60.00th=[ 245], 00:32:17.405 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 260], 00:32:17.405 | 99.00th=[ 285], 99.50th=[ 359], 99.90th=[ 433], 99.95th=[ 494], 00:32:17.405 | 99.99th=[ 562] 00:32:17.405 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:17.405 slat (nsec): min=9276, max=39296, avg=10511.58, stdev=1458.36 00:32:17.405 clat (usec): min=118, max=359, avg=150.18, stdev=26.59 
00:32:17.405 lat (usec): min=128, max=398, avg=160.69, stdev=26.94 00:32:17.405 clat percentiles (usec): 00:32:17.405 | 1.00th=[ 124], 5.00th=[ 127], 10.00th=[ 129], 20.00th=[ 131], 00:32:17.405 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 145], 00:32:17.405 | 70.00th=[ 153], 80.00th=[ 172], 90.00th=[ 196], 95.00th=[ 204], 00:32:17.405 | 99.00th=[ 223], 99.50th=[ 231], 99.90th=[ 269], 99.95th=[ 334], 00:32:17.405 | 99.99th=[ 359] 00:32:17.405 bw ( KiB/s): min=12248, max=12248, per=66.35%, avg=12248.00, stdev= 0.00, samples=1 00:32:17.405 iops : min= 3062, max= 3062, avg=3062.00, stdev= 0.00, samples=1 00:32:17.405 lat (usec) : 250=91.87%, 500=8.11%, 750=0.02% 00:32:17.405 cpu : usr=2.90%, sys=4.30%, ctx=4956, majf=0, minf=1 00:32:17.405 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.405 issued rwts: total=2394,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.405 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:17.405 job3: (groupid=0, jobs=1): err= 0: pid=671763: Wed Nov 20 12:42:00 2024 00:32:17.405 read: IOPS=602, BW=2412KiB/s (2470kB/s)(2472KiB/1025msec) 00:32:17.405 slat (nsec): min=6660, max=25619, avg=8051.93, stdev=2766.40 00:32:17.405 clat (usec): min=175, max=42002, avg=1345.69, stdev=6638.37 00:32:17.405 lat (usec): min=183, max=42024, avg=1353.74, stdev=6639.11 00:32:17.405 clat percentiles (usec): 00:32:17.405 | 1.00th=[ 196], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 219], 00:32:17.405 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 229], 00:32:17.405 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 255], 95.00th=[ 281], 00:32:17.405 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:32:17.405 | 99.99th=[42206] 00:32:17.405 write: IOPS=999, BW=3996KiB/s (4092kB/s)(4096KiB/1025msec); 0 zone 
resets 00:32:17.405 slat (nsec): min=9349, max=44314, avg=10396.95, stdev=1576.07 00:32:17.405 clat (usec): min=127, max=286, avg=170.00, stdev=23.28 00:32:17.405 lat (usec): min=137, max=315, avg=180.39, stdev=23.68 00:32:17.405 clat percentiles (usec): 00:32:17.405 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 147], 00:32:17.405 | 30.00th=[ 151], 40.00th=[ 159], 50.00th=[ 167], 60.00th=[ 176], 00:32:17.405 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 210], 00:32:17.405 | 99.00th=[ 223], 99.50th=[ 227], 99.90th=[ 258], 99.95th=[ 285], 00:32:17.405 | 99.99th=[ 285] 00:32:17.405 bw ( KiB/s): min= 8192, max= 8192, per=44.38%, avg=8192.00, stdev= 0.00, samples=1 00:32:17.405 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:17.405 lat (usec) : 250=95.19%, 500=3.78% 00:32:17.405 lat (msec) : 50=1.04% 00:32:17.405 cpu : usr=0.68%, sys=1.66%, ctx=1644, majf=0, minf=1 00:32:17.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.406 issued rwts: total=618,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:17.406 00:32:17.406 Run status group 0 (all jobs): 00:32:17.406 READ: bw=13.5MiB/s (14.2MB/s), 87.5KiB/s-9566KiB/s (89.6kB/s-9796kB/s), io=13.9MiB (14.5MB), run=1001-1025msec 00:32:17.406 WRITE: bw=18.0MiB/s (18.9MB/s), 2036KiB/s-9.99MiB/s (2085kB/s-10.5MB/s), io=18.5MiB (19.4MB), run=1001-1025msec 00:32:17.406 00:32:17.406 Disk stats (read/write): 00:32:17.406 nvme0n1: ios=125/512, merge=0/0, ticks=766/92, in_queue=858, util=86.67% 00:32:17.406 nvme0n2: ios=33/512, merge=0/0, ticks=753/87, in_queue=840, util=87.11% 00:32:17.406 nvme0n3: ios=2104/2134, merge=0/0, ticks=1573/321, in_queue=1894, util=98.44% 00:32:17.406 nvme0n4: ios=667/1024, merge=0/0, 
ticks=1426/175, in_queue=1601, util=98.43% 00:32:17.406 12:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:17.406 [global] 00:32:17.406 thread=1 00:32:17.406 invalidate=1 00:32:17.406 rw=write 00:32:17.406 time_based=1 00:32:17.406 runtime=1 00:32:17.406 ioengine=libaio 00:32:17.406 direct=1 00:32:17.406 bs=4096 00:32:17.406 iodepth=128 00:32:17.406 norandommap=0 00:32:17.406 numjobs=1 00:32:17.406 00:32:17.406 verify_dump=1 00:32:17.406 verify_backlog=512 00:32:17.406 verify_state_save=0 00:32:17.406 do_verify=1 00:32:17.406 verify=crc32c-intel 00:32:17.406 [job0] 00:32:17.406 filename=/dev/nvme0n1 00:32:17.406 [job1] 00:32:17.406 filename=/dev/nvme0n2 00:32:17.406 [job2] 00:32:17.406 filename=/dev/nvme0n3 00:32:17.406 [job3] 00:32:17.406 filename=/dev/nvme0n4 00:32:17.406 Could not set queue depth (nvme0n1) 00:32:17.406 Could not set queue depth (nvme0n2) 00:32:17.406 Could not set queue depth (nvme0n3) 00:32:17.406 Could not set queue depth (nvme0n4) 00:32:17.665 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:17.665 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:17.665 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:17.665 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:17.665 fio-3.35 00:32:17.665 Starting 4 threads 00:32:19.185 00:32:19.185 job0: (groupid=0, jobs=1): err= 0: pid=672171: Wed Nov 20 12:42:01 2024 00:32:19.185 read: IOPS=4157, BW=16.2MiB/s (17.0MB/s)(17.0MiB/1046msec) 00:32:19.185 slat (nsec): min=1334, max=18469k, avg=106477.96, stdev=860390.87 00:32:19.185 clat (usec): min=3876, max=65778, avg=15887.41, stdev=13540.36 00:32:19.185 lat 
(usec): min=3887, max=65789, avg=15993.89, stdev=13619.70 00:32:19.185 clat percentiles (usec): 00:32:19.185 | 1.00th=[ 5538], 5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 7635], 00:32:19.185 | 30.00th=[ 7963], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9765], 00:32:19.185 | 70.00th=[11863], 80.00th=[27657], 90.00th=[39060], 95.00th=[47449], 00:32:19.185 | 99.00th=[58983], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:32:19.185 | 99.99th=[65799] 00:32:19.185 write: IOPS=4894, BW=19.1MiB/s (20.0MB/s)(20.0MiB/1046msec); 0 zone resets 00:32:19.185 slat (usec): min=2, max=21815, avg=84.59, stdev=739.03 00:32:19.185 clat (usec): min=1094, max=73872, avg=12400.55, stdev=11349.64 00:32:19.185 lat (usec): min=1107, max=73886, avg=12485.14, stdev=11428.83 00:32:19.185 clat percentiles (usec): 00:32:19.185 | 1.00th=[ 3163], 5.00th=[ 5080], 10.00th=[ 5997], 20.00th=[ 7242], 00:32:19.185 | 30.00th=[ 7635], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8291], 00:32:19.185 | 70.00th=[ 9896], 80.00th=[13829], 90.00th=[26346], 95.00th=[41157], 00:32:19.185 | 99.00th=[60031], 99.50th=[64750], 99.90th=[73925], 99.95th=[73925], 00:32:19.185 | 99.99th=[73925] 00:32:19.185 bw ( KiB/s): min=16384, max=24576, per=32.69%, avg=20480.00, stdev=5792.62, samples=2 00:32:19.185 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:32:19.185 lat (msec) : 2=0.03%, 4=1.07%, 10=65.49%, 20=14.93%, 50=15.44% 00:32:19.185 lat (msec) : 100=3.04% 00:32:19.185 cpu : usr=3.73%, sys=5.26%, ctx=350, majf=0, minf=2 00:32:19.185 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:19.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:19.185 issued rwts: total=4349,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.185 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:19.185 job1: (groupid=0, jobs=1): err= 0: pid=672172: Wed Nov 20 
12:42:01 2024 00:32:19.185 read: IOPS=3071, BW=12.0MiB/s (12.6MB/s)(12.5MiB/1044msec) 00:32:19.185 slat (nsec): min=1546, max=15633k, avg=135707.14, stdev=821652.39 00:32:19.185 clat (usec): min=3933, max=99277, avg=19926.58, stdev=16154.49 00:32:19.185 lat (msec): min=3, max=105, avg=20.06, stdev=16.21 00:32:19.185 clat percentiles (usec): 00:32:19.185 | 1.00th=[ 4490], 5.00th=[ 7963], 10.00th=[ 9372], 20.00th=[10290], 00:32:19.185 | 30.00th=[11469], 40.00th=[14877], 50.00th=[17171], 60.00th=[18744], 00:32:19.185 | 70.00th=[20317], 80.00th=[20579], 90.00th=[27657], 95.00th=[58983], 00:32:19.185 | 99.00th=[99091], 99.50th=[99091], 99.90th=[99091], 99.95th=[99091], 00:32:19.185 | 99.99th=[99091] 00:32:19.185 write: IOPS=3432, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1044msec); 0 zone resets 00:32:19.185 slat (usec): min=2, max=49289, avg=151.16, stdev=1235.96 00:32:19.185 clat (usec): min=3925, max=90909, avg=19021.40, stdev=11767.98 00:32:19.185 lat (usec): min=3935, max=90916, avg=19172.56, stdev=11845.86 00:32:19.185 clat percentiles (usec): 00:32:19.185 | 1.00th=[ 6456], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9372], 00:32:19.185 | 30.00th=[10814], 40.00th=[16909], 50.00th=[17695], 60.00th=[20579], 00:32:19.185 | 70.00th=[21103], 80.00th=[21627], 90.00th=[30278], 95.00th=[38011], 00:32:19.185 | 99.00th=[90702], 99.50th=[90702], 99.90th=[90702], 99.95th=[90702], 00:32:19.185 | 99.99th=[90702] 00:32:19.185 bw ( KiB/s): min=12288, max=16384, per=22.88%, avg=14336.00, stdev=2896.31, samples=2 00:32:19.185 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:32:19.185 lat (msec) : 4=0.44%, 10=20.20%, 20=39.41%, 50=36.17%, 100=3.78% 00:32:19.185 cpu : usr=2.21%, sys=5.18%, ctx=325, majf=0, minf=1 00:32:19.185 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:32:19.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:32:19.185 issued rwts: total=3207,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.185 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:19.185 job2: (groupid=0, jobs=1): err= 0: pid=672176: Wed Nov 20 12:42:01 2024 00:32:19.185 read: IOPS=3516, BW=13.7MiB/s (14.4MB/s)(14.4MiB/1046msec) 00:32:19.185 slat (nsec): min=1403, max=14708k, avg=128762.59, stdev=942995.56 00:32:19.185 clat (usec): min=4128, max=62547, avg=17650.03, stdev=9040.31 00:32:19.185 lat (usec): min=4140, max=62557, avg=17778.79, stdev=9094.54 00:32:19.185 clat percentiles (usec): 00:32:19.185 | 1.00th=[ 8848], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[11076], 00:32:19.185 | 30.00th=[12649], 40.00th=[13304], 50.00th=[15139], 60.00th=[17433], 00:32:19.185 | 70.00th=[20579], 80.00th=[21103], 90.00th=[25035], 95.00th=[32900], 00:32:19.185 | 99.00th=[54789], 99.50th=[59507], 99.90th=[62653], 99.95th=[62653], 00:32:19.185 | 99.99th=[62653] 00:32:19.185 write: IOPS=3915, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1046msec); 0 zone resets 00:32:19.185 slat (usec): min=2, max=14214, avg=119.28, stdev=869.75 00:32:19.185 clat (usec): min=1199, max=62485, avg=16518.94, stdev=10293.36 00:32:19.185 lat (usec): min=1210, max=62490, avg=16638.22, stdev=10365.00 00:32:19.185 clat percentiles (usec): 00:32:19.185 | 1.00th=[ 4752], 5.00th=[ 7898], 10.00th=[ 8586], 20.00th=[ 9241], 00:32:19.185 | 30.00th=[11207], 40.00th=[11863], 50.00th=[12780], 60.00th=[14484], 00:32:19.185 | 70.00th=[19268], 80.00th=[20579], 90.00th=[28181], 95.00th=[41157], 00:32:19.185 | 99.00th=[56361], 99.50th=[57410], 99.90th=[58983], 99.95th=[58983], 00:32:19.185 | 99.99th=[62653] 00:32:19.185 bw ( KiB/s): min=14016, max=18443, per=25.90%, avg=16229.50, stdev=3130.36, samples=2 00:32:19.185 iops : min= 3504, max= 4610, avg=4057.00, stdev=782.06, samples=2 00:32:19.185 lat (msec) : 2=0.12%, 4=0.27%, 10=15.27%, 20=52.39%, 50=28.83% 00:32:19.185 lat (msec) : 100=3.13% 00:32:19.185 cpu : usr=3.54%, sys=4.98%, ctx=212, majf=0, minf=1 
00:32:19.185 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:19.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:19.185 issued rwts: total=3678,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.185 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:19.185 job3: (groupid=0, jobs=1): err= 0: pid=672177: Wed Nov 20 12:42:01 2024 00:32:19.185 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(13.9MiB/1003msec) 00:32:19.185 slat (nsec): min=1138, max=14335k, avg=113976.72, stdev=792871.14 00:32:19.185 clat (usec): min=1984, max=67391, avg=14386.29, stdev=7016.73 00:32:19.185 lat (usec): min=2378, max=67399, avg=14500.27, stdev=7091.63 00:32:19.185 clat percentiles (usec): 00:32:19.185 | 1.00th=[ 4555], 5.00th=[ 5604], 10.00th=[ 8455], 20.00th=[10159], 00:32:19.185 | 30.00th=[11207], 40.00th=[12780], 50.00th=[13435], 60.00th=[15008], 00:32:19.185 | 70.00th=[16581], 80.00th=[17433], 90.00th=[19530], 95.00th=[21103], 00:32:19.185 | 99.00th=[53740], 99.50th=[60556], 99.90th=[67634], 99.95th=[67634], 00:32:19.185 | 99.99th=[67634] 00:32:19.185 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:32:19.185 slat (usec): min=2, max=42745, avg=146.83, stdev=1247.54 00:32:19.185 clat (usec): min=856, max=93452, avg=20194.94, stdev=16949.95 00:32:19.185 lat (usec): min=1769, max=93465, avg=20341.77, stdev=17066.38 00:32:19.185 clat percentiles (usec): 00:32:19.185 | 1.00th=[ 6063], 5.00th=[ 7308], 10.00th=[ 9110], 20.00th=[10290], 00:32:19.185 | 30.00th=[11076], 40.00th=[11994], 50.00th=[13042], 60.00th=[15664], 00:32:19.185 | 70.00th=[17957], 80.00th=[21103], 90.00th=[54264], 95.00th=[55837], 00:32:19.185 | 99.00th=[86508], 99.50th=[88605], 99.90th=[93848], 99.95th=[93848], 00:32:19.185 | 99.99th=[93848] 00:32:19.185 bw ( KiB/s): min=12288, max=16351, per=22.85%, avg=14319.50, stdev=2872.97, samples=2 
00:32:19.185 iops : min= 3072, max= 4087, avg=3579.50, stdev=717.71, samples=2 00:32:19.185 lat (usec) : 1000=0.01% 00:32:19.185 lat (msec) : 2=0.11%, 4=0.25%, 10=17.68%, 20=66.19%, 50=9.13% 00:32:19.185 lat (msec) : 100=6.63% 00:32:19.185 cpu : usr=2.59%, sys=4.19%, ctx=283, majf=0, minf=1 00:32:19.185 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:32:19.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:19.185 issued rwts: total=3570,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.185 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:19.185 00:32:19.185 Run status group 0 (all jobs): 00:32:19.185 READ: bw=55.3MiB/s (58.0MB/s), 12.0MiB/s-16.2MiB/s (12.6MB/s-17.0MB/s), io=57.8MiB (60.6MB), run=1003-1046msec 00:32:19.185 WRITE: bw=61.2MiB/s (64.2MB/s), 13.4MiB/s-19.1MiB/s (14.1MB/s-20.0MB/s), io=64.0MiB (67.1MB), run=1003-1046msec 00:32:19.185 00:32:19.185 Disk stats (read/write): 00:32:19.185 nvme0n1: ios=2821/3584, merge=0/0, ticks=29313/32563, in_queue=61876, util=82.06% 00:32:19.185 nvme0n2: ios=2796/3072, merge=0/0, ticks=17097/20771, in_queue=37868, util=97.43% 00:32:19.185 nvme0n3: ios=3130/3472, merge=0/0, ticks=44473/46792, in_queue=91265, util=97.51% 00:32:19.185 nvme0n4: ios=3107/3266, merge=0/0, ticks=26856/28624, in_queue=55480, util=96.03% 00:32:19.185 12:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:19.185 [global] 00:32:19.185 thread=1 00:32:19.185 invalidate=1 00:32:19.185 rw=randwrite 00:32:19.185 time_based=1 00:32:19.185 runtime=1 00:32:19.185 ioengine=libaio 00:32:19.185 direct=1 00:32:19.185 bs=4096 00:32:19.185 iodepth=128 00:32:19.185 norandommap=0 00:32:19.185 numjobs=1 00:32:19.185 00:32:19.185 verify_dump=1 00:32:19.185 
verify_backlog=512 00:32:19.185 verify_state_save=0 00:32:19.185 do_verify=1 00:32:19.185 verify=crc32c-intel 00:32:19.185 [job0] 00:32:19.185 filename=/dev/nvme0n1 00:32:19.185 [job1] 00:32:19.185 filename=/dev/nvme0n2 00:32:19.185 [job2] 00:32:19.185 filename=/dev/nvme0n3 00:32:19.186 [job3] 00:32:19.186 filename=/dev/nvme0n4 00:32:19.186 Could not set queue depth (nvme0n1) 00:32:19.186 Could not set queue depth (nvme0n2) 00:32:19.186 Could not set queue depth (nvme0n3) 00:32:19.186 Could not set queue depth (nvme0n4) 00:32:19.444 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:19.444 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:19.444 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:19.444 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:19.444 fio-3.35 00:32:19.444 Starting 4 threads 00:32:20.822 00:32:20.822 job0: (groupid=0, jobs=1): err= 0: pid=672635: Wed Nov 20 12:42:03 2024 00:32:20.822 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:32:20.822 slat (nsec): min=1695, max=14139k, avg=107567.17, stdev=876505.06 00:32:20.822 clat (usec): min=4824, max=37456, avg=13668.80, stdev=4770.04 00:32:20.822 lat (usec): min=4835, max=37466, avg=13776.37, stdev=4839.41 00:32:20.822 clat percentiles (usec): 00:32:20.822 | 1.00th=[ 7177], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10552], 00:32:20.822 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12518], 60.00th=[12911], 00:32:20.822 | 70.00th=[13435], 80.00th=[16188], 90.00th=[20579], 95.00th=[23200], 00:32:20.822 | 99.00th=[32900], 99.50th=[34866], 99.90th=[37487], 99.95th=[37487], 00:32:20.822 | 99.99th=[37487] 00:32:20.822 write: IOPS=4943, BW=19.3MiB/s (20.2MB/s)(19.5MiB/1008msec); 0 zone resets 00:32:20.822 slat (usec): min=2, 
max=16715, avg=95.93, stdev=705.85 00:32:20.822 clat (usec): min=313, max=37461, avg=13010.15, stdev=5433.67 00:32:20.822 lat (usec): min=1436, max=37471, avg=13106.08, stdev=5477.15 00:32:20.822 clat percentiles (usec): 00:32:20.822 | 1.00th=[ 5342], 5.00th=[ 6128], 10.00th=[ 6718], 20.00th=[ 8586], 00:32:20.822 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[12256], 60.00th=[13435], 00:32:20.822 | 70.00th=[14353], 80.00th=[19530], 90.00th=[21890], 95.00th=[22414], 00:32:20.822 | 99.00th=[24511], 99.50th=[25822], 99.90th=[34341], 99.95th=[36963], 00:32:20.822 | 99.99th=[37487] 00:32:20.822 bw ( KiB/s): min=17672, max=21168, per=26.51%, avg=19420.00, stdev=2472.05, samples=2 00:32:20.822 iops : min= 4418, max= 5292, avg=4855.00, stdev=618.01, samples=2 00:32:20.822 lat (usec) : 500=0.01% 00:32:20.822 lat (msec) : 2=0.02%, 4=0.19%, 10=27.20%, 20=57.38%, 50=15.20% 00:32:20.822 cpu : usr=3.87%, sys=6.16%, ctx=285, majf=0, minf=1 00:32:20.822 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:20.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:20.822 issued rwts: total=4608,4983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:20.822 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:20.822 job1: (groupid=0, jobs=1): err= 0: pid=672636: Wed Nov 20 12:42:03 2024 00:32:20.822 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec) 00:32:20.822 slat (nsec): min=1301, max=10346k, avg=93804.80, stdev=700001.88 00:32:20.822 clat (usec): min=2083, max=40438, avg=11949.23, stdev=5399.70 00:32:20.822 lat (usec): min=2092, max=40445, avg=12043.03, stdev=5453.41 00:32:20.822 clat percentiles (usec): 00:32:20.822 | 1.00th=[ 2900], 5.00th=[ 7046], 10.00th=[ 8356], 20.00th=[ 9110], 00:32:20.822 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10683], 00:32:20.822 | 70.00th=[11994], 80.00th=[14353], 90.00th=[17171], 
95.00th=[25297], 00:32:20.822 | 99.00th=[33817], 99.50th=[35390], 99.90th=[38536], 99.95th=[40633], 00:32:20.822 | 99.99th=[40633] 00:32:20.822 write: IOPS=5525, BW=21.6MiB/s (22.6MB/s)(21.8MiB/1010msec); 0 zone resets 00:32:20.822 slat (usec): min=2, max=8831, avg=85.39, stdev=541.08 00:32:20.822 clat (usec): min=2460, max=57856, avg=11907.95, stdev=7581.87 00:32:20.822 lat (usec): min=2466, max=57864, avg=11993.33, stdev=7633.87 00:32:20.822 clat percentiles (usec): 00:32:20.822 | 1.00th=[ 4015], 5.00th=[ 5604], 10.00th=[ 6587], 20.00th=[ 8094], 00:32:20.822 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10421], 00:32:20.822 | 70.00th=[10683], 80.00th=[12649], 90.00th=[20579], 95.00th=[23200], 00:32:20.822 | 99.00th=[49021], 99.50th=[55313], 99.90th=[57934], 99.95th=[57934], 00:32:20.822 | 99.99th=[57934] 00:32:20.822 bw ( KiB/s): min=18416, max=25216, per=29.78%, avg=21816.00, stdev=4808.33, samples=2 00:32:20.822 iops : min= 4604, max= 6304, avg=5454.00, stdev=1202.08, samples=2 00:32:20.822 lat (msec) : 4=1.37%, 10=44.15%, 20=46.34%, 50=7.63%, 100=0.51% 00:32:20.822 cpu : usr=4.36%, sys=6.05%, ctx=461, majf=0, minf=1 00:32:20.822 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:32:20.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:20.822 issued rwts: total=5120,5581,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:20.822 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:20.822 job2: (groupid=0, jobs=1): err= 0: pid=672637: Wed Nov 20 12:42:03 2024 00:32:20.822 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:32:20.822 slat (nsec): min=1390, max=18289k, avg=97519.49, stdev=803481.55 00:32:20.822 clat (usec): min=4654, max=42630, avg=13085.87, stdev=4983.15 00:32:20.822 lat (usec): min=4662, max=42645, avg=13183.39, stdev=5038.52 00:32:20.822 clat percentiles (usec): 00:32:20.822 | 
1.00th=[ 5735], 5.00th=[ 7570], 10.00th=[ 8586], 20.00th=[10290], 00:32:20.822 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[12387], 00:32:20.822 | 70.00th=[13304], 80.00th=[16909], 90.00th=[19268], 95.00th=[22414], 00:32:20.822 | 99.00th=[38011], 99.50th=[38011], 99.90th=[39584], 99.95th=[39584], 00:32:20.822 | 99.99th=[42730] 00:32:20.822 write: IOPS=4985, BW=19.5MiB/s (20.4MB/s)(19.6MiB/1004msec); 0 zone resets 00:32:20.822 slat (usec): min=2, max=22239, avg=101.65, stdev=786.88 00:32:20.822 clat (usec): min=1817, max=67953, avg=13362.59, stdev=8849.86 00:32:20.822 lat (usec): min=4600, max=67967, avg=13464.24, stdev=8900.26 00:32:20.822 clat percentiles (usec): 00:32:20.822 | 1.00th=[ 5407], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 9372], 00:32:20.822 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:32:20.822 | 70.00th=[11994], 80.00th=[15008], 90.00th=[19268], 95.00th=[23987], 00:32:20.822 | 99.00th=[66323], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:32:20.822 | 99.99th=[67634] 00:32:20.822 bw ( KiB/s): min=16384, max=22632, per=26.63%, avg=19508.00, stdev=4418.00, samples=2 00:32:20.822 iops : min= 4096, max= 5658, avg=4877.00, stdev=1104.50, samples=2 00:32:20.822 lat (msec) : 2=0.01%, 10=20.76%, 20=69.89%, 50=8.18%, 100=1.15% 00:32:20.822 cpu : usr=4.09%, sys=6.18%, ctx=395, majf=0, minf=1 00:32:20.822 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:20.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:20.822 issued rwts: total=4608,5005,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:20.822 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:20.822 job3: (groupid=0, jobs=1): err= 0: pid=672638: Wed Nov 20 12:42:03 2024 00:32:20.822 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:32:20.822 slat (nsec): min=1775, max=17947k, avg=157162.60, 
stdev=950223.72 00:32:20.822 clat (usec): min=2996, max=68835, avg=18718.00, stdev=7490.81 00:32:20.822 lat (usec): min=3006, max=68842, avg=18875.16, stdev=7568.97 00:32:20.822 clat percentiles (usec): 00:32:20.822 | 1.00th=[ 3097], 5.00th=[10814], 10.00th=[12125], 20.00th=[13566], 00:32:20.822 | 30.00th=[14484], 40.00th=[15533], 50.00th=[17957], 60.00th=[20579], 00:32:20.822 | 70.00th=[21890], 80.00th=[22152], 90.00th=[24773], 95.00th=[28181], 00:32:20.822 | 99.00th=[53216], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:32:20.822 | 99.99th=[68682] 00:32:20.822 write: IOPS=2899, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1010msec); 0 zone resets 00:32:20.822 slat (usec): min=2, max=21003, avg=191.28, stdev=1359.28 00:32:20.822 clat (usec): min=209, max=84241, avg=27337.65, stdev=17757.55 00:32:20.822 lat (usec): min=4656, max=84249, avg=27528.93, stdev=17843.90 00:32:20.822 clat percentiles (usec): 00:32:20.822 | 1.00th=[ 7504], 5.00th=[10028], 10.00th=[11338], 20.00th=[12125], 00:32:20.822 | 30.00th=[14615], 40.00th=[19006], 50.00th=[21890], 60.00th=[23462], 00:32:20.823 | 70.00th=[29492], 80.00th=[44827], 90.00th=[54264], 95.00th=[64226], 00:32:20.823 | 99.00th=[81265], 99.50th=[84411], 99.90th=[84411], 99.95th=[84411], 00:32:20.823 | 99.99th=[84411] 00:32:20.823 bw ( KiB/s): min= 6016, max=16384, per=15.29%, avg=11200.00, stdev=7331.28, samples=2 00:32:20.823 iops : min= 1504, max= 4096, avg=2800.00, stdev=1832.82, samples=2 00:32:20.823 lat (usec) : 250=0.02% 00:32:20.823 lat (msec) : 4=0.58%, 10=3.39%, 20=46.87%, 50=41.98%, 100=7.16% 00:32:20.823 cpu : usr=1.88%, sys=3.67%, ctx=228, majf=0, minf=1 00:32:20.823 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:32:20.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:20.823 issued rwts: total=2560,2928,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:20.823 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:32:20.823 00:32:20.823 Run status group 0 (all jobs): 00:32:20.823 READ: bw=65.3MiB/s (68.5MB/s), 9.90MiB/s-19.8MiB/s (10.4MB/s-20.8MB/s), io=66.0MiB (69.2MB), run=1004-1010msec 00:32:20.823 WRITE: bw=71.5MiB/s (75.0MB/s), 11.3MiB/s-21.6MiB/s (11.9MB/s-22.6MB/s), io=72.3MiB (75.8MB), run=1004-1010msec 00:32:20.823 00:32:20.823 Disk stats (read/write): 00:32:20.823 nvme0n1: ios=4146/4263, merge=0/0, ticks=52556/51239, in_queue=103795, util=86.97% 00:32:20.823 nvme0n2: ios=4146/4608, merge=0/0, ticks=38867/42224, in_queue=81091, util=94.12% 00:32:20.823 nvme0n3: ios=3820/4096, merge=0/0, ticks=50203/55349, in_queue=105552, util=96.57% 00:32:20.823 nvme0n4: ios=2330/2560, merge=0/0, ticks=21263/32365, in_queue=53628, util=98.43% 00:32:20.823 12:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:20.823 12:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=672867 00:32:20.823 12:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:20.823 12:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:20.823 [global] 00:32:20.823 thread=1 00:32:20.823 invalidate=1 00:32:20.823 rw=read 00:32:20.823 time_based=1 00:32:20.823 runtime=10 00:32:20.823 ioengine=libaio 00:32:20.823 direct=1 00:32:20.823 bs=4096 00:32:20.823 iodepth=1 00:32:20.823 norandommap=1 00:32:20.823 numjobs=1 00:32:20.823 00:32:20.823 [job0] 00:32:20.823 filename=/dev/nvme0n1 00:32:20.823 [job1] 00:32:20.823 filename=/dev/nvme0n2 00:32:20.823 [job2] 00:32:20.823 filename=/dev/nvme0n3 00:32:20.823 [job3] 00:32:20.823 filename=/dev/nvme0n4 00:32:20.823 Could not set queue depth (nvme0n1) 00:32:20.823 Could not set queue depth (nvme0n2) 00:32:20.823 Could not set queue depth (nvme0n3) 
00:32:20.823 Could not set queue depth (nvme0n4) 00:32:20.823 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:20.823 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:20.823 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:20.823 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:20.823 fio-3.35 00:32:20.823 Starting 4 threads 00:32:24.112 12:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:24.112 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=41787392, buflen=4096 00:32:24.112 fio: pid=673018, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:24.112 12:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:24.112 12:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:24.112 12:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:24.112 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=8396800, buflen=4096 00:32:24.112 fio: pid=673017, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:24.112 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=54284288, buflen=4096 00:32:24.112 fio: pid=673009, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:24.112 12:42:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:24.112 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:24.372 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:24.372 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:24.372 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=9191424, buflen=4096 00:32:24.372 fio: pid=673014, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:24.372 00:32:24.372 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=673009: Wed Nov 20 12:42:07 2024 00:32:24.372 read: IOPS=4226, BW=16.5MiB/s (17.3MB/s)(51.8MiB/3136msec) 00:32:24.372 slat (usec): min=6, max=14622, avg=11.57, stdev=171.18 00:32:24.372 clat (usec): min=175, max=1699, avg=221.32, stdev=21.47 00:32:24.372 lat (usec): min=184, max=14978, avg=232.90, stdev=174.47 00:32:24.372 clat percentiles (usec): 00:32:24.372 | 1.00th=[ 188], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 208], 00:32:24.372 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 223], 00:32:24.372 | 70.00th=[ 229], 80.00th=[ 233], 90.00th=[ 241], 95.00th=[ 247], 00:32:24.372 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 318], 99.95th=[ 408], 00:32:24.372 | 99.99th=[ 996] 00:32:24.372 bw ( KiB/s): min=16841, max=17580, per=51.89%, avg=17044.83, stdev=284.91, samples=6 00:32:24.372 iops : min= 4210, max= 4395, avg=4261.17, stdev=71.26, samples=6 00:32:24.372 lat (usec) : 250=96.24%, 500=3.73%, 1000=0.01% 
00:32:24.372 lat (msec) : 2=0.01% 00:32:24.372 cpu : usr=2.36%, sys=7.37%, ctx=13257, majf=0, minf=1 00:32:24.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:24.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.372 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.372 issued rwts: total=13254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:24.372 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=673014: Wed Nov 20 12:42:07 2024 00:32:24.372 read: IOPS=664, BW=2656KiB/s (2720kB/s)(8976KiB/3379msec) 00:32:24.372 slat (usec): min=6, max=15473, avg=31.10, stdev=528.58 00:32:24.372 clat (usec): min=189, max=41208, avg=1464.05, stdev=6986.38 00:32:24.372 lat (usec): min=196, max=45049, avg=1495.16, stdev=7015.43 00:32:24.372 clat percentiles (usec): 00:32:24.372 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 217], 00:32:24.372 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:32:24.372 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 265], 00:32:24.372 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:24.372 | 99.99th=[41157] 00:32:24.372 bw ( KiB/s): min= 96, max=12480, per=6.60%, avg=2169.17, stdev=5051.29, samples=6 00:32:24.372 iops : min= 24, max= 3120, avg=542.17, stdev=1262.88, samples=6 00:32:24.372 lat (usec) : 250=87.39%, 500=9.31%, 750=0.18%, 1000=0.04% 00:32:24.372 lat (msec) : 50=3.03% 00:32:24.372 cpu : usr=0.06%, sys=0.77%, ctx=2251, majf=0, minf=2 00:32:24.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:24.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.372 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.372 issued rwts: total=2245,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:32:24.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:24.372 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=673017: Wed Nov 20 12:42:07 2024 00:32:24.372 read: IOPS=694, BW=2776KiB/s (2843kB/s)(8200KiB/2954msec) 00:32:24.372 slat (nsec): min=6734, max=71151, avg=9505.61, stdev=2836.43 00:32:24.372 clat (usec): min=211, max=41439, avg=1418.87, stdev=6798.57 00:32:24.372 lat (usec): min=220, max=41447, avg=1428.36, stdev=6799.34 00:32:24.372 clat percentiles (usec): 00:32:24.372 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 239], 00:32:24.372 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:32:24.372 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 289], 00:32:24.372 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:24.372 | 99.99th=[41681] 00:32:24.372 bw ( KiB/s): min= 183, max= 6736, per=9.87%, avg=3243.00, stdev=2929.62, samples=5 00:32:24.372 iops : min= 45, max= 1684, avg=810.60, stdev=732.60, samples=5 00:32:24.372 lat (usec) : 250=66.36%, 500=30.67% 00:32:24.372 lat (msec) : 2=0.05%, 50=2.88% 00:32:24.372 cpu : usr=0.30%, sys=0.78%, ctx=2052, majf=0, minf=2 00:32:24.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:24.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.372 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.372 issued rwts: total=2051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:24.372 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=673018: Wed Nov 20 12:42:07 2024 00:32:24.372 read: IOPS=3773, BW=14.7MiB/s (15.5MB/s)(39.9MiB/2704msec) 00:32:24.372 slat (nsec): min=6323, max=40269, avg=7736.13, stdev=1021.22 00:32:24.372 clat (usec): min=205, max=2854, 
avg=255.28, stdev=33.94 00:32:24.372 lat (usec): min=212, max=2861, avg=263.01, stdev=33.97 00:32:24.372 clat percentiles (usec): 00:32:24.372 | 1.00th=[ 227], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 241], 00:32:24.372 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 255], 00:32:24.372 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 285], 00:32:24.372 | 99.00th=[ 306], 99.50th=[ 330], 99.90th=[ 506], 99.95th=[ 562], 00:32:24.372 | 99.99th=[ 758] 00:32:24.372 bw ( KiB/s): min=14499, max=15520, per=46.21%, avg=15178.20, stdev=422.77, samples=5 00:32:24.372 iops : min= 3624, max= 3880, avg=3794.40, stdev=105.99, samples=5 00:32:24.372 lat (usec) : 250=43.68%, 500=56.17%, 750=0.12%, 1000=0.01% 00:32:24.372 lat (msec) : 4=0.01% 00:32:24.372 cpu : usr=1.07%, sys=3.48%, ctx=10205, majf=0, minf=2 00:32:24.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:24.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.372 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.372 issued rwts: total=10203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:24.372 00:32:24.372 Run status group 0 (all jobs): 00:32:24.372 READ: bw=32.1MiB/s (33.6MB/s), 2656KiB/s-16.5MiB/s (2720kB/s-17.3MB/s), io=108MiB (114MB), run=2704-3379msec 00:32:24.372 00:32:24.372 Disk stats (read/write): 00:32:24.372 nvme0n1: ios=13204/0, merge=0/0, ticks=2766/0, in_queue=2766, util=94.82% 00:32:24.372 nvme0n2: ios=2243/0, merge=0/0, ticks=3241/0, in_queue=3241, util=94.92% 00:32:24.372 nvme0n3: ios=2046/0, merge=0/0, ticks=2801/0, in_queue=2801, util=96.52% 00:32:24.372 nvme0n4: ios=9911/0, merge=0/0, ticks=3499/0, in_queue=3499, util=99.41% 00:32:24.631 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:24.631 
12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:24.891 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:24.891 12:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:25.150 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:25.150 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:25.150 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:25.150 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:25.410 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:25.410 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 672867 00:32:25.410 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:25.410 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:25.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:25.669 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:25.669 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:32:25.669 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:25.669 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:25.669 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:25.669 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:25.669 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:32:25.669 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:25.669 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:25.669 nvmf hotplug test: fio failed as expected 00:32:25.669 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:32:25.929 12:42:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:25.929 rmmod nvme_tcp 00:32:25.929 rmmod nvme_fabrics 00:32:25.929 rmmod nvme_keyring 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 670133 ']' 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 670133 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 670133 ']' 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 670133 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 670133 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 670133' 00:32:25.929 killing process with pid 670133 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 670133 00:32:25.929 12:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 670133 00:32:26.189 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:26.189 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:26.189 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:26.189 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:32:26.189 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:26.189 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:32:26.189 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:32:26.189 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:26.189 12:42:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:26.189 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.189 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:26.189 12:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:28.096 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:28.096 00:32:28.096 real 0m26.034s 00:32:28.096 user 1m30.692s 00:32:28.096 sys 0m11.994s 00:32:28.096 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:28.096 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:28.096 ************************************ 00:32:28.096 END TEST nvmf_fio_target 00:32:28.096 ************************************ 00:32:28.096 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:28.096 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:28.096 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:28.096 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:28.356 ************************************ 00:32:28.356 START TEST nvmf_bdevio 00:32:28.356 ************************************ 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:28.356 * Looking for test storage... 00:32:28.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:32:28.356 12:42:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # 
(( ver1[v] < ver2[v] )) 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:28.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.356 --rc genhtml_branch_coverage=1 00:32:28.356 --rc genhtml_function_coverage=1 00:32:28.356 --rc genhtml_legend=1 00:32:28.356 --rc geninfo_all_blocks=1 00:32:28.356 --rc geninfo_unexecuted_blocks=1 00:32:28.356 00:32:28.356 ' 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:28.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.356 --rc genhtml_branch_coverage=1 00:32:28.356 --rc genhtml_function_coverage=1 00:32:28.356 --rc genhtml_legend=1 00:32:28.356 --rc geninfo_all_blocks=1 00:32:28.356 --rc geninfo_unexecuted_blocks=1 00:32:28.356 00:32:28.356 ' 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:28.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.356 --rc genhtml_branch_coverage=1 00:32:28.356 --rc genhtml_function_coverage=1 00:32:28.356 --rc genhtml_legend=1 00:32:28.356 --rc geninfo_all_blocks=1 00:32:28.356 --rc geninfo_unexecuted_blocks=1 00:32:28.356 00:32:28.356 ' 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:28.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.356 --rc genhtml_branch_coverage=1 00:32:28.356 --rc genhtml_function_coverage=1 00:32:28.356 --rc genhtml_legend=1 00:32:28.356 --rc 
geninfo_all_blocks=1 00:32:28.356 --rc geninfo_unexecuted_blocks=1 00:32:28.356 00:32:28.356 ' 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:28.356 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:28.357 12:42:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:28.357 12:42:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:28.357 12:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:34.926 12:42:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:34.926 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:34.926 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.926 12:42:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:34.926 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:34.927 Found net devices under 0000:86:00.0: cvl_0_0 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:34.927 Found net devices under 0000:86:00.1: cvl_0_1 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:34.927 12:42:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:34.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:34.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:32:34.927 00:32:34.927 --- 10.0.0.2 ping statistics --- 00:32:34.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.927 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:34.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:34.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:32:34.927 00:32:34.927 --- 10.0.0.1 ping statistics --- 00:32:34.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.927 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=677695 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 677695 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 677695 ']' 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:34.927 [2024-11-20 12:42:17.379831] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:34.927 [2024-11-20 12:42:17.380839] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:32:34.927 [2024-11-20 12:42:17.380876] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:34.927 [2024-11-20 12:42:17.458875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:34.927 [2024-11-20 12:42:17.501127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:34.927 [2024-11-20 12:42:17.501166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:34.927 [2024-11-20 12:42:17.501173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:34.927 [2024-11-20 12:42:17.501180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:34.927 [2024-11-20 12:42:17.501185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:34.927 [2024-11-20 12:42:17.502832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:34.927 [2024-11-20 12:42:17.502945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:34.927 [2024-11-20 12:42:17.503054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:34.927 [2024-11-20 12:42:17.503055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:34.927 [2024-11-20 12:42:17.571421] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:34.927 [2024-11-20 12:42:17.572107] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:34.927 [2024-11-20 12:42:17.572390] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:34.927 [2024-11-20 12:42:17.572749] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:34.927 [2024-11-20 12:42:17.572799] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:34.927 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:34.928 [2024-11-20 12:42:17.639873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:34.928 Malloc0 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:34.928 [2024-11-20 12:42:17.715941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:34.928 { 00:32:34.928 "params": { 00:32:34.928 "name": "Nvme$subsystem", 00:32:34.928 "trtype": "$TEST_TRANSPORT", 00:32:34.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:34.928 "adrfam": "ipv4", 00:32:34.928 "trsvcid": "$NVMF_PORT", 00:32:34.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:34.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:34.928 "hdgst": ${hdgst:-false}, 00:32:34.928 "ddgst": ${ddgst:-false} 00:32:34.928 }, 00:32:34.928 "method": "bdev_nvme_attach_controller" 00:32:34.928 } 00:32:34.928 EOF 00:32:34.928 )") 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:34.928 12:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:34.928 "params": { 00:32:34.928 "name": "Nvme1", 00:32:34.928 "trtype": "tcp", 00:32:34.928 "traddr": "10.0.0.2", 00:32:34.928 "adrfam": "ipv4", 00:32:34.928 "trsvcid": "4420", 00:32:34.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:34.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:34.928 "hdgst": false, 00:32:34.928 "ddgst": false 00:32:34.928 }, 00:32:34.928 "method": "bdev_nvme_attach_controller" 00:32:34.928 }' 00:32:34.928 [2024-11-20 12:42:17.768359] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:32:34.928 [2024-11-20 12:42:17.768404] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid677884 ] 00:32:34.928 [2024-11-20 12:42:17.843296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:34.928 [2024-11-20 12:42:17.887053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.928 [2024-11-20 12:42:17.887161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.928 [2024-11-20 12:42:17.887161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:35.187 I/O targets: 00:32:35.187 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:35.187 00:32:35.187 00:32:35.187 CUnit - A unit testing framework for C - Version 2.1-3 00:32:35.187 http://cunit.sourceforge.net/ 00:32:35.187 00:32:35.187 00:32:35.187 Suite: bdevio tests on: Nvme1n1 00:32:35.187 Test: blockdev write read block ...passed 00:32:35.445 Test: blockdev write zeroes read block ...passed 00:32:35.446 Test: blockdev write zeroes read no split ...passed 00:32:35.446 Test: blockdev 
write zeroes read split ...passed 00:32:35.446 Test: blockdev write zeroes read split partial ...passed 00:32:35.446 Test: blockdev reset ...[2024-11-20 12:42:18.427513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:35.446 [2024-11-20 12:42:18.427578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1721340 (9): Bad file descriptor 00:32:35.446 [2024-11-20 12:42:18.471907] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:32:35.446 passed 00:32:35.446 Test: blockdev write read 8 blocks ...passed 00:32:35.446 Test: blockdev write read size > 128k ...passed 00:32:35.446 Test: blockdev write read invalid size ...passed 00:32:35.446 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:35.446 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:35.446 Test: blockdev write read max offset ...passed 00:32:35.705 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:35.705 Test: blockdev writev readv 8 blocks ...passed 00:32:35.705 Test: blockdev writev readv 30 x 1block ...passed 00:32:35.705 Test: blockdev writev readv block ...passed 00:32:35.705 Test: blockdev writev readv size > 128k ...passed 00:32:35.705 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:35.705 Test: blockdev comparev and writev ...[2024-11-20 12:42:18.723864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:35.705 [2024-11-20 12:42:18.723893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.705 [2024-11-20 12:42:18.723907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:35.705 
[2024-11-20 12:42:18.723915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:35.705 [2024-11-20 12:42:18.724215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:35.705 [2024-11-20 12:42:18.724226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:35.705 [2024-11-20 12:42:18.724242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:35.705 [2024-11-20 12:42:18.724249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:35.705 [2024-11-20 12:42:18.724535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:35.705 [2024-11-20 12:42:18.724546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:35.705 [2024-11-20 12:42:18.724558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:35.705 [2024-11-20 12:42:18.724566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:35.705 [2024-11-20 12:42:18.724854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:35.705 [2024-11-20 12:42:18.724865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:35.705 [2024-11-20 12:42:18.724877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:35.705 [2024-11-20 12:42:18.724885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:35.705 passed 00:32:35.705 Test: blockdev nvme passthru rw ...passed 00:32:35.705 Test: blockdev nvme passthru vendor specific ...[2024-11-20 12:42:18.807330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:35.705 [2024-11-20 12:42:18.807351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:35.705 [2024-11-20 12:42:18.807462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:35.705 [2024-11-20 12:42:18.807473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:35.705 [2024-11-20 12:42:18.807576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:35.705 [2024-11-20 12:42:18.807586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:35.705 [2024-11-20 12:42:18.807694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:35.705 [2024-11-20 12:42:18.807703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:35.705 passed 00:32:35.705 Test: blockdev nvme admin passthru ...passed 00:32:35.964 Test: blockdev copy ...passed 00:32:35.964 00:32:35.964 Run Summary: Type Total Ran Passed Failed Inactive 00:32:35.964 suites 1 1 n/a 0 0 00:32:35.964 tests 23 23 23 0 0 00:32:35.964 asserts 152 152 152 0 n/a 00:32:35.964 00:32:35.964 Elapsed time = 1.248 
seconds 00:32:35.964 12:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:35.964 12:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.964 12:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:35.964 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.964 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:35.964 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:35.964 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:35.964 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:35.964 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:35.964 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:35.965 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:35.965 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:35.965 rmmod nvme_tcp 00:32:35.965 rmmod nvme_fabrics 00:32:35.965 rmmod nvme_keyring 00:32:35.965 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:35.965 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:35.965 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:35.965 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 677695 ']' 00:32:35.965 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 677695 00:32:35.965 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 677695 ']' 00:32:35.965 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 677695 00:32:35.965 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:35.965 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:35.965 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 677695 00:32:36.236 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:36.236 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:36.236 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 677695' 00:32:36.236 killing process with pid 677695 00:32:36.236 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 677695 00:32:36.236 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 677695 00:32:36.236 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:36.236 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:36.236 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:36.236 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:32:36.236 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:36.236 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:36.236 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:36.236 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:36.236 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:36.236 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.236 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:36.237 12:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.776 12:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:38.776 00:32:38.776 real 0m10.163s 00:32:38.776 user 0m10.134s 00:32:38.776 sys 0m5.286s 00:32:38.776 12:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:38.776 12:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:38.776 ************************************ 00:32:38.776 END TEST nvmf_bdevio 00:32:38.776 ************************************ 00:32:38.776 12:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:38.776 00:32:38.776 real 4m34.056s 00:32:38.776 user 9m7.200s 00:32:38.776 sys 1m52.170s 00:32:38.776 12:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:32:38.776 12:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:38.776 ************************************ 00:32:38.776 END TEST nvmf_target_core_interrupt_mode 00:32:38.776 ************************************ 00:32:38.776 12:42:21 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:38.776 12:42:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:38.776 12:42:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:38.776 12:42:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:38.776 ************************************ 00:32:38.776 START TEST nvmf_interrupt 00:32:38.776 ************************************ 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:38.776 * Looking for test storage... 
00:32:38.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:38.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.776 --rc genhtml_branch_coverage=1 00:32:38.776 --rc genhtml_function_coverage=1 00:32:38.776 --rc genhtml_legend=1 00:32:38.776 --rc geninfo_all_blocks=1 00:32:38.776 --rc geninfo_unexecuted_blocks=1 00:32:38.776 00:32:38.776 ' 00:32:38.776 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:38.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.776 --rc genhtml_branch_coverage=1 00:32:38.776 --rc 
genhtml_function_coverage=1 00:32:38.776 --rc genhtml_legend=1 00:32:38.776 --rc geninfo_all_blocks=1 00:32:38.777 --rc geninfo_unexecuted_blocks=1 00:32:38.777 00:32:38.777 ' 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:38.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.777 --rc genhtml_branch_coverage=1 00:32:38.777 --rc genhtml_function_coverage=1 00:32:38.777 --rc genhtml_legend=1 00:32:38.777 --rc geninfo_all_blocks=1 00:32:38.777 --rc geninfo_unexecuted_blocks=1 00:32:38.777 00:32:38.777 ' 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:38.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:38.777 --rc genhtml_branch_coverage=1 00:32:38.777 --rc genhtml_function_coverage=1 00:32:38.777 --rc genhtml_legend=1 00:32:38.777 --rc geninfo_all_blocks=1 00:32:38.777 --rc geninfo_unexecuted_blocks=1 00:32:38.777 00:32:38.777 ' 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:38.777 
12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.777 
12:42:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:38.777 12:42:21 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:38.777 
12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:38.777 12:42:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:45.346 12:42:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:45.346 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:45.346 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:45.346 12:42:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:45.346 Found net devices under 0000:86:00.0: cvl_0_0 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:45.346 Found net devices under 0000:86:00.1: cvl_0_1 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:45.346 12:42:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:45.346 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:45.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:45.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:32:45.347 00:32:45.347 --- 10.0.0.2 ping statistics --- 00:32:45.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.347 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:45.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:45.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:32:45.347 00:32:45.347 --- 10.0.0.1 ping statistics --- 00:32:45.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.347 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:45.347 12:42:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=681508 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 681508 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 681508 ']' 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:45.347 [2024-11-20 12:42:27.618432] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:45.347 [2024-11-20 12:42:27.619410] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:32:45.347 [2024-11-20 12:42:27.619444] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:45.347 [2024-11-20 12:42:27.700805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:45.347 [2024-11-20 12:42:27.742104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:45.347 [2024-11-20 12:42:27.742141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:45.347 [2024-11-20 12:42:27.742151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:45.347 [2024-11-20 12:42:27.742157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:45.347 [2024-11-20 12:42:27.742162] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:45.347 [2024-11-20 12:42:27.743337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.347 [2024-11-20 12:42:27.743339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.347 [2024-11-20 12:42:27.810311] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:45.347 [2024-11-20 12:42:27.810864] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:45.347 [2024-11-20 12:42:27.811086] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:45.347 5000+0 records in 00:32:45.347 5000+0 records out 00:32:45.347 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0173107 s, 592 MB/s 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:45.347 AIO0 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.347 12:42:27 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:45.347 [2024-11-20 12:42:27.936134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:45.347 [2024-11-20 12:42:27.976447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 681508 0 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 681508 0 idle 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=681508 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 681508 -w 256 00:32:45.347 12:42:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 681508 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.25 reactor_0' 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 681508 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.25 reactor_0 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 681508 1 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 681508 1 idle 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=681508 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:45.347 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 681508 -w 256 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 681555 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 681555 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 
reactor_1 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=681690 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 681508 0 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 681508 0 busy 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=681508 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 681508 -w 256 00:32:45.348 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 681508 root 20 0 128.2g 46848 33792 R 60.0 0.0 0:00.35 reactor_0' 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 681508 root 20 0 128.2g 46848 33792 R 60.0 0.0 0:00.35 reactor_0 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=60.0 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=60 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- 
# BUSY_THRESHOLD=30 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 681508 1 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 681508 1 busy 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=681508 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 681508 -w 256 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 681555 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.23 reactor_1' 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 681555 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.23 reactor_1 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:45.608 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:45.867 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:45.867 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:45.867 12:42:28 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:45.867 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:45.867 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:45.867 12:42:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:45.867 12:42:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 681690 00:32:55.847 Initializing NVMe Controllers 00:32:55.847 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:55.847 Controller IO queue size 256, less than required. 00:32:55.847 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:55.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:55.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:55.847 Initialization complete. Launching workers. 
00:32:55.847 ======================================================== 00:32:55.847 Latency(us) 00:32:55.847 Device Information : IOPS MiB/s Average min max 00:32:55.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 15965.79 62.37 16041.52 3436.79 31825.85 00:32:55.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16064.99 62.75 15939.81 7483.78 28651.54 00:32:55.847 ======================================================== 00:32:55.847 Total : 32030.78 125.12 15990.50 3436.79 31825.85 00:32:55.847 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 681508 0 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 681508 0 idle 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=681508 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 681508 -w 256 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep 
reactor_0 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 681508 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.25 reactor_0' 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 681508 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.25 reactor_0 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 681508 1 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 681508 1 idle 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=681508 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:55.847 12:42:38 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 681508 -w 256 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 681555 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 681555 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:55.847 12:42:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:56.415 12:42:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:32:56.415 12:42:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:56.415 12:42:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:56.415 12:42:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:56.415 12:42:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:58.319 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:58.319 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:58.319 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:58.319 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:58.319 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:58.319 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:58.319 12:42:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:58.319 12:42:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 681508 0 00:32:58.319 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 681508 0 idle 00:32:58.319 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=681508 00:32:58.319 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:58.319 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:58.319 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:58.319 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:58.319 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:58.319 12:42:41 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:58.319 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:58.319 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:58.320 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:58.320 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 681508 -w 256 00:32:58.320 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 681508 root 20 0 128.2g 72960 33792 S 6.7 0.0 0:20.48 reactor_0' 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 681508 root 20 0 128.2g 72960 33792 S 6.7 0.0 0:20.48 reactor_0 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 681508 1 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 681508 1 idle 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=681508 00:32:58.579 
12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 681508 -w 256 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 681555 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.09 reactor_1' 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 681555 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.09 reactor_1 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:58.579 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:58.837 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:58.837 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:58.837 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:58.837 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:58.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:58.838 rmmod nvme_tcp 00:32:58.838 rmmod nvme_fabrics 00:32:58.838 rmmod nvme_keyring 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:58.838 12:42:41 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 681508 ']' 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 681508 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 681508 ']' 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 681508 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:58.838 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 681508 00:32:59.097 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:59.097 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:59.097 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 681508' 00:32:59.097 killing process with pid 681508 00:32:59.097 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 681508 00:32:59.097 12:42:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 681508 00:32:59.097 12:42:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:59.097 12:42:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:59.097 12:42:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:59.097 12:42:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:59.097 12:42:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:59.097 12:42:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:59.097 12:42:42 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:32:59.097 12:42:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:59.097 12:42:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:59.097 12:42:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.097 12:42:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:59.097 12:42:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.634 12:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:01.634 00:33:01.634 real 0m22.761s 00:33:01.634 user 0m39.603s 00:33:01.634 sys 0m8.374s 00:33:01.634 12:42:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:01.634 12:42:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:01.634 ************************************ 00:33:01.634 END TEST nvmf_interrupt 00:33:01.634 ************************************ 00:33:01.634 00:33:01.634 real 27m27.369s 00:33:01.634 user 56m23.920s 00:33:01.634 sys 9m20.907s 00:33:01.634 12:42:44 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:01.634 12:42:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:01.634 ************************************ 00:33:01.634 END TEST nvmf_tcp 00:33:01.634 ************************************ 00:33:01.634 12:42:44 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:33:01.634 12:42:44 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:01.634 12:42:44 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:01.634 12:42:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:01.634 12:42:44 -- common/autotest_common.sh@10 -- # set +x 00:33:01.634 ************************************ 
00:33:01.634 START TEST spdkcli_nvmf_tcp 00:33:01.634 ************************************ 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:01.634 * Looking for test storage... 00:33:01.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:01.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.634 --rc genhtml_branch_coverage=1 00:33:01.634 --rc genhtml_function_coverage=1 00:33:01.634 --rc genhtml_legend=1 00:33:01.634 --rc geninfo_all_blocks=1 00:33:01.634 --rc geninfo_unexecuted_blocks=1 00:33:01.634 00:33:01.634 ' 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:01.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.634 --rc genhtml_branch_coverage=1 00:33:01.634 --rc genhtml_function_coverage=1 00:33:01.634 --rc genhtml_legend=1 00:33:01.634 --rc geninfo_all_blocks=1 
00:33:01.634 --rc geninfo_unexecuted_blocks=1 00:33:01.634 00:33:01.634 ' 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:01.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.634 --rc genhtml_branch_coverage=1 00:33:01.634 --rc genhtml_function_coverage=1 00:33:01.634 --rc genhtml_legend=1 00:33:01.634 --rc geninfo_all_blocks=1 00:33:01.634 --rc geninfo_unexecuted_blocks=1 00:33:01.634 00:33:01.634 ' 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:01.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.634 --rc genhtml_branch_coverage=1 00:33:01.634 --rc genhtml_function_coverage=1 00:33:01.634 --rc genhtml_legend=1 00:33:01.634 --rc geninfo_all_blocks=1 00:33:01.634 --rc geninfo_unexecuted_blocks=1 00:33:01.634 00:33:01.634 ' 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.634 12:42:44 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:01.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=684372 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 684372 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 684372 ']' 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:01.635 12:42:44 
spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:01.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:01.635 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:01.635 [2024-11-20 12:42:44.623549] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:33:01.635 [2024-11-20 12:42:44.623596] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid684372 ] 00:33:01.635 [2024-11-20 12:42:44.698751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:01.635 [2024-11-20 12:42:44.740379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.635 [2024-11-20 12:42:44.740380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.893 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:01.893 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:33:01.893 12:42:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:01.893 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:01.893 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:01.893 12:42:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:01.893 12:42:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:01.893 12:42:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:01.893 
12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:01.893 12:42:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:01.893 12:42:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:01.893 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:01.893 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:01.893 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:01.893 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:01.893 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:01.893 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:01.893 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:01.893 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:01.893 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:01.893 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:01.893 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:01.893 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:01.893 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:01.893 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:01.893 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:01.893 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:01.893 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:01.893 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:01.893 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:01.893 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:01.893 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:01.893 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:01.893 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:01.893 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:01.893 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:01.893 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:01.893 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:01.893 ' 00:33:05.178 [2024-11-20 12:42:47.582584] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:06.115 [2024-11-20 12:42:48.919097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:08.647 [2024-11-20 12:42:51.394719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:33:10.550 [2024-11-20 12:42:53.569433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:12.455 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:12.455 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:12.455 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:12.455 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:12.455 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:12.455 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:12.455 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:12.455 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:12.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:12.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:12.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:12.455 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:12.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:12.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:12.455 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:33:12.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:12.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:12.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:12.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:12.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:12.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:12.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:12.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:12.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:12.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:12.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:12.455 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:12.455 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:12.455 12:42:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:12.455 12:42:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:12.455 
12:42:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:12.455 12:42:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:12.455 12:42:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:12.455 12:42:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:12.455 12:42:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:12.455 12:42:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:12.714 12:42:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:12.714 12:42:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:12.714 12:42:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:12.715 12:42:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:12.715 12:42:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:12.973 12:42:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:12.973 12:42:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:12.973 12:42:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:12.973 12:42:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:12.973 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:12.973 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:12.973 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:12.973 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:12.973 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:12.973 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:12.973 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:12.973 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:12.973 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:12.973 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:12.973 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:12.973 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:12.973 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:12.973 ' 00:33:18.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:18.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:18.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:18.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:18.244 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:18.245 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:18.245 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:18.245 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:18.245 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:18.245 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:18.245 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:18.245 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:18.245 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:18.245 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:18.505 12:43:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:18.505 12:43:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:18.505 12:43:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:18.505 12:43:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 684372 00:33:18.505 12:43:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 684372 ']' 00:33:18.505 12:43:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 684372 00:33:18.505 12:43:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:18.505 12:43:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:18.505 12:43:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 684372 00:33:18.505 12:43:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:18.505 12:43:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:18.505 12:43:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 684372' 00:33:18.505 killing process with pid 684372 00:33:18.505 12:43:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 684372 00:33:18.505 12:43:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 684372 00:33:18.765 12:43:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
cleanup 00:33:18.765 12:43:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:18.765 12:43:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 684372 ']' 00:33:18.765 12:43:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 684372 00:33:18.765 12:43:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 684372 ']' 00:33:18.765 12:43:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 684372 00:33:18.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (684372) - No such process 00:33:18.765 12:43:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 684372 is not found' 00:33:18.765 Process with pid 684372 is not found 00:33:18.765 12:43:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:18.765 12:43:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:18.765 12:43:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:18.765 00:33:18.765 real 0m17.341s 00:33:18.765 user 0m38.240s 00:33:18.765 sys 0m0.796s 00:33:18.765 12:43:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:18.765 12:43:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:18.765 ************************************ 00:33:18.765 END TEST spdkcli_nvmf_tcp 00:33:18.765 ************************************ 00:33:18.765 12:43:01 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:18.765 12:43:01 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:18.765 12:43:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:18.765 12:43:01 -- common/autotest_common.sh@10 
-- # set +x 00:33:18.765 ************************************ 00:33:18.765 START TEST nvmf_identify_passthru 00:33:18.765 ************************************ 00:33:18.765 12:43:01 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:18.765 * Looking for test storage... 00:33:18.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:18.765 12:43:01 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:18.765 12:43:01 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:33:18.765 12:43:01 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:19.025 12:43:01 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:19.025 12:43:01 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:19.025 12:43:01 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:19.025 12:43:01 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:19.025 12:43:01 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:19.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.026 --rc genhtml_branch_coverage=1 00:33:19.026 --rc genhtml_function_coverage=1 00:33:19.026 --rc genhtml_legend=1 00:33:19.026 --rc geninfo_all_blocks=1 00:33:19.026 --rc geninfo_unexecuted_blocks=1 00:33:19.026 00:33:19.026 ' 00:33:19.026 
12:43:01 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:19.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.026 --rc genhtml_branch_coverage=1 00:33:19.026 --rc genhtml_function_coverage=1 00:33:19.026 --rc genhtml_legend=1 00:33:19.026 --rc geninfo_all_blocks=1 00:33:19.026 --rc geninfo_unexecuted_blocks=1 00:33:19.026 00:33:19.026 ' 00:33:19.026 12:43:01 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:19.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.026 --rc genhtml_branch_coverage=1 00:33:19.026 --rc genhtml_function_coverage=1 00:33:19.026 --rc genhtml_legend=1 00:33:19.026 --rc geninfo_all_blocks=1 00:33:19.026 --rc geninfo_unexecuted_blocks=1 00:33:19.026 00:33:19.026 ' 00:33:19.026 12:43:01 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:19.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.026 --rc genhtml_branch_coverage=1 00:33:19.026 --rc genhtml_function_coverage=1 00:33:19.026 --rc genhtml_legend=1 00:33:19.026 --rc geninfo_all_blocks=1 00:33:19.026 --rc geninfo_unexecuted_blocks=1 00:33:19.026 00:33:19.026 ' 00:33:19.026 12:43:01 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:19.026 12:43:01 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:19.026 12:43:01 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:19.026 12:43:01 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:19.026 12:43:01 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:19.026 12:43:01 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.026 12:43:01 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.026 12:43:01 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.026 12:43:01 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:19.026 12:43:01 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:19.026 12:43:01 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:19.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:19.026 12:43:01 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:19.026 12:43:01 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:19.026 12:43:01 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:19.026 12:43:01 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:19.026 12:43:01 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:19.026 12:43:01 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.026 12:43:01 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.026 12:43:01 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.026 12:43:01 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:19.026 12:43:01 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.026 12:43:01 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.026 12:43:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:19.026 12:43:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:19.026 12:43:01 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:19.026 12:43:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:25.671 
12:43:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:25.671 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:25.672 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:25.672 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:25.672 Found net devices under 0000:86:00.0: cvl_0_0 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.672 12:43:07 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:25.672 Found net devices under 0000:86:00.1: cvl_0_1 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:25.672 
12:43:07 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:25.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:25.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:33:25.672 00:33:25.672 --- 10.0.0.2 ping statistics --- 00:33:25.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.672 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:25.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:25.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:33:25.672 00:33:25.672 --- 10.0.0.1 ping statistics --- 00:33:25.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.672 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:25.672 12:43:07 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:25.672 12:43:07 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:25.672 12:43:07 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:25.672 12:43:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:25.672 12:43:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:25.672 
12:43:07 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:25.672 12:43:07 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:33:25.672 12:43:07 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:25.672 12:43:07 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:25.672 12:43:07 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:25.672 12:43:07 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:33:25.672 12:43:07 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:25.672 12:43:07 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:25.672 12:43:07 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:25.672 12:43:07 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:25.672 12:43:07 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:33:25.672 12:43:07 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:33:25.672 12:43:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:33:25.672 12:43:07 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:33:25.672 12:43:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:25.672 12:43:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:25.672 12:43:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:29.048 12:43:12 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:33:29.048 12:43:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:29.048 12:43:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:29.048 12:43:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:33.239 12:43:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:33.239 12:43:16 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:33.239 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:33.239 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.239 12:43:16 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:33.239 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:33.239 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.239 12:43:16 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=691630 00:33:33.239 12:43:16 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:33.239 12:43:16 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:33.239 12:43:16 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 691630 00:33:33.239 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 691630 ']' 00:33:33.239 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:33:33.239 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:33.239 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:33.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:33.239 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:33.239 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.499 [2024-11-20 12:43:16.382680] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:33:33.499 [2024-11-20 12:43:16.382730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:33.499 [2024-11-20 12:43:16.461126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:33.499 [2024-11-20 12:43:16.504200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:33.499 [2024-11-20 12:43:16.504240] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:33.499 [2024-11-20 12:43:16.504247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:33.499 [2024-11-20 12:43:16.504254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:33.499 [2024-11-20 12:43:16.504259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:33.499 [2024-11-20 12:43:16.505840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.499 [2024-11-20 12:43:16.505975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:33.499 [2024-11-20 12:43:16.506088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.499 [2024-11-20 12:43:16.506088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:33.499 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:33.499 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:33.499 12:43:16 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:33.499 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.499 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.499 INFO: Log level set to 20 00:33:33.499 INFO: Requests: 00:33:33.499 { 00:33:33.499 "jsonrpc": "2.0", 00:33:33.499 "method": "nvmf_set_config", 00:33:33.499 "id": 1, 00:33:33.499 "params": { 00:33:33.499 "admin_cmd_passthru": { 00:33:33.499 "identify_ctrlr": true 00:33:33.499 } 00:33:33.499 } 00:33:33.499 } 00:33:33.499 00:33:33.499 INFO: response: 00:33:33.499 { 00:33:33.499 "jsonrpc": "2.0", 00:33:33.499 "id": 1, 00:33:33.499 "result": true 00:33:33.499 } 00:33:33.499 00:33:33.499 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.499 12:43:16 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:33.499 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.499 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.499 INFO: Setting log level to 20 00:33:33.499 INFO: Setting log level to 20 00:33:33.499 INFO: Log level set to 20 00:33:33.499 INFO: Log level set to 20 00:33:33.499 
INFO: Requests: 00:33:33.499 { 00:33:33.499 "jsonrpc": "2.0", 00:33:33.499 "method": "framework_start_init", 00:33:33.499 "id": 1 00:33:33.499 } 00:33:33.499 00:33:33.499 INFO: Requests: 00:33:33.499 { 00:33:33.499 "jsonrpc": "2.0", 00:33:33.499 "method": "framework_start_init", 00:33:33.499 "id": 1 00:33:33.499 } 00:33:33.499 00:33:33.499 [2024-11-20 12:43:16.610344] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:33.759 INFO: response: 00:33:33.759 { 00:33:33.759 "jsonrpc": "2.0", 00:33:33.759 "id": 1, 00:33:33.759 "result": true 00:33:33.759 } 00:33:33.759 00:33:33.759 INFO: response: 00:33:33.759 { 00:33:33.759 "jsonrpc": "2.0", 00:33:33.759 "id": 1, 00:33:33.759 "result": true 00:33:33.759 } 00:33:33.759 00:33:33.759 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.759 12:43:16 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:33.759 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.759 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.759 INFO: Setting log level to 40 00:33:33.759 INFO: Setting log level to 40 00:33:33.759 INFO: Setting log level to 40 00:33:33.759 [2024-11-20 12:43:16.623698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.759 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.759 12:43:16 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:33.759 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:33.759 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.759 12:43:16 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:33:33.759 12:43:16 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.759 12:43:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:37.047 Nvme0n1 00:33:37.047 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.047 12:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:37.047 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.047 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:37.047 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.047 12:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:37.047 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.047 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:37.047 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.047 12:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:37.047 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.047 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:37.047 [2024-11-20 12:43:19.535594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:37.047 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.047 12:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:37.047 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.047 12:43:19 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:37.047 [ 00:33:37.047 { 00:33:37.047 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:37.047 "subtype": "Discovery", 00:33:37.047 "listen_addresses": [], 00:33:37.047 "allow_any_host": true, 00:33:37.047 "hosts": [] 00:33:37.047 }, 00:33:37.047 { 00:33:37.047 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:37.047 "subtype": "NVMe", 00:33:37.047 "listen_addresses": [ 00:33:37.047 { 00:33:37.047 "trtype": "TCP", 00:33:37.047 "adrfam": "IPv4", 00:33:37.047 "traddr": "10.0.0.2", 00:33:37.047 "trsvcid": "4420" 00:33:37.047 } 00:33:37.047 ], 00:33:37.047 "allow_any_host": true, 00:33:37.047 "hosts": [], 00:33:37.047 "serial_number": "SPDK00000000000001", 00:33:37.047 "model_number": "SPDK bdev Controller", 00:33:37.047 "max_namespaces": 1, 00:33:37.047 "min_cntlid": 1, 00:33:37.047 "max_cntlid": 65519, 00:33:37.047 "namespaces": [ 00:33:37.047 { 00:33:37.047 "nsid": 1, 00:33:37.047 "bdev_name": "Nvme0n1", 00:33:37.047 "name": "Nvme0n1", 00:33:37.047 "nguid": "A5EA99056AF242BFBBFFFB0CDEFEE399", 00:33:37.047 "uuid": "a5ea9905-6af2-42bf-bbff-fb0cdefee399" 00:33:37.047 } 00:33:37.047 ] 00:33:37.047 } 00:33:37.047 ] 00:33:37.047 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.047 12:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:37.047 12:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:37.047 12:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:37.047 12:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:33:37.047 12:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:37.047 12:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:37.047 12:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:37.047 12:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:37.048 12:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:33:37.048 12:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:37.048 12:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:37.048 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.048 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:37.048 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.048 12:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:37.048 12:43:19 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:37.048 12:43:19 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:37.048 12:43:19 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:37.048 12:43:19 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:37.048 12:43:19 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:37.048 12:43:19 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:37.048 12:43:19 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:37.048 rmmod nvme_tcp 00:33:37.048 rmmod nvme_fabrics 00:33:37.048 rmmod nvme_keyring 00:33:37.048 12:43:19 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:37.048 12:43:19 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:37.048 12:43:19 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:37.048 12:43:19 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 691630 ']' 00:33:37.048 12:43:19 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 691630 00:33:37.048 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 691630 ']' 00:33:37.048 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 691630 00:33:37.048 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:37.048 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:37.048 12:43:19 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 691630 00:33:37.048 12:43:20 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:37.048 12:43:20 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:37.048 12:43:20 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 691630' 00:33:37.048 killing process with pid 691630 00:33:37.048 12:43:20 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 691630 00:33:37.048 12:43:20 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 691630 00:33:38.426 12:43:21 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:38.426 12:43:21 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:38.426 12:43:21 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:38.426 12:43:21 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:38.426 12:43:21 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:38.426 12:43:21 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:33:38.426 12:43:21 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:38.426 12:43:21 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:38.426 12:43:21 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:38.426 12:43:21 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.426 12:43:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:38.426 12:43:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.964 12:43:23 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:40.964 00:33:40.964 real 0m21.795s 00:33:40.964 user 0m26.617s 00:33:40.964 sys 0m6.212s 00:33:40.964 12:43:23 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:40.965 12:43:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:40.965 ************************************ 00:33:40.965 END TEST nvmf_identify_passthru 00:33:40.965 ************************************ 00:33:40.965 12:43:23 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:40.965 12:43:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:40.965 12:43:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:40.965 12:43:23 -- common/autotest_common.sh@10 -- # set +x 00:33:40.965 ************************************ 00:33:40.965 START TEST nvmf_dif 00:33:40.965 ************************************ 00:33:40.965 12:43:23 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:40.965 * Looking for test storage... 
00:33:40.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:40.965 12:43:23 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:40.965 12:43:23 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:33:40.965 12:43:23 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:40.965 12:43:23 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:40.965 12:43:23 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:40.965 12:43:23 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:40.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.965 --rc genhtml_branch_coverage=1 00:33:40.965 --rc genhtml_function_coverage=1 00:33:40.965 --rc genhtml_legend=1 00:33:40.965 --rc geninfo_all_blocks=1 00:33:40.965 --rc geninfo_unexecuted_blocks=1 00:33:40.965 00:33:40.965 ' 00:33:40.965 12:43:23 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:40.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.965 --rc genhtml_branch_coverage=1 00:33:40.965 --rc genhtml_function_coverage=1 00:33:40.965 --rc genhtml_legend=1 00:33:40.965 --rc geninfo_all_blocks=1 00:33:40.965 --rc geninfo_unexecuted_blocks=1 00:33:40.965 00:33:40.965 ' 00:33:40.965 12:43:23 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:33:40.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.965 --rc genhtml_branch_coverage=1 00:33:40.965 --rc genhtml_function_coverage=1 00:33:40.965 --rc genhtml_legend=1 00:33:40.965 --rc geninfo_all_blocks=1 00:33:40.965 --rc geninfo_unexecuted_blocks=1 00:33:40.965 00:33:40.965 ' 00:33:40.965 12:43:23 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:40.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.965 --rc genhtml_branch_coverage=1 00:33:40.965 --rc genhtml_function_coverage=1 00:33:40.965 --rc genhtml_legend=1 00:33:40.965 --rc geninfo_all_blocks=1 00:33:40.965 --rc geninfo_unexecuted_blocks=1 00:33:40.965 00:33:40.965 ' 00:33:40.965 12:43:23 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:40.965 12:43:23 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:40.965 12:43:23 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:40.965 12:43:23 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:40.965 12:43:23 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:40.965 12:43:23 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:40.965 12:43:23 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:40.965 12:43:23 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:40.965 12:43:23 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:40.965 12:43:23 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:40.965 12:43:23 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:40.965 12:43:23 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:40.965 12:43:23 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:40.965 12:43:23 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:40.965 12:43:23 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:40.965 12:43:23 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:40.965 12:43:23 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:40.965 12:43:23 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:40.965 12:43:23 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:40.965 12:43:23 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:40.965 12:43:23 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.965 12:43:23 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.965 12:43:23 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.965 12:43:23 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:40.966 12:43:23 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.966 12:43:23 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:40.966 12:43:23 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:40.966 12:43:23 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:40.966 12:43:23 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:40.966 12:43:23 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:40.966 12:43:23 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:40.966 12:43:23 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:40.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:40.966 12:43:23 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:40.966 12:43:23 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:40.966 12:43:23 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:40.966 12:43:23 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:40.966 12:43:23 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:33:40.966 12:43:23 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:40.966 12:43:23 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:40.966 12:43:23 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:40.966 12:43:23 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:40.966 12:43:23 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:40.966 12:43:23 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:40.966 12:43:23 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:40.966 12:43:23 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:40.966 12:43:23 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.966 12:43:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:40.966 12:43:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.966 12:43:23 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:40.966 12:43:23 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:40.966 12:43:23 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:33:40.966 12:43:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:47.532 12:43:29 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:47.532 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:47.532 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:47.532 12:43:29 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:47.532 Found net devices under 0000:86:00.0: cvl_0_0 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:47.532 Found net devices under 0000:86:00.1: cvl_0_1 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:47.532 
12:43:29 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:47.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:47.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:33:47.532 00:33:47.532 --- 10.0.0.2 ping statistics --- 00:33:47.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.532 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:47.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:47.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:33:47.532 00:33:47.532 --- 10.0.0.1 ping statistics --- 00:33:47.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.532 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:47.532 12:43:29 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:49.439 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:49.439 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:49.439 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:49.439 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:49.439 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:49.439 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:49.439 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:49.439 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:49.439 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:49.439 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:49.439 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:49.439 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:49.439 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:33:49.439 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:49.439 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:49.439 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:49.439 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:49.698 12:43:32 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:49.698 12:43:32 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:49.698 12:43:32 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:49.698 12:43:32 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:49.698 12:43:32 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:49.698 12:43:32 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:49.698 12:43:32 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:49.699 12:43:32 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:49.699 12:43:32 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:49.699 12:43:32 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:49.699 12:43:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:49.699 12:43:32 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=697105 00:33:49.699 12:43:32 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 697105 00:33:49.699 12:43:32 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:49.699 12:43:32 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 697105 ']' 00:33:49.699 12:43:32 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.699 12:43:32 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:49.699 12:43:32 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:49.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:49.699 12:43:32 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:49.699 12:43:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:49.699 [2024-11-20 12:43:32.679033] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:33:49.699 [2024-11-20 12:43:32.679079] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:49.699 [2024-11-20 12:43:32.757238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.699 [2024-11-20 12:43:32.799671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:49.699 [2024-11-20 12:43:32.799707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:49.699 [2024-11-20 12:43:32.799714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:49.699 [2024-11-20 12:43:32.799721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:49.699 [2024-11-20 12:43:32.799727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
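Before the target app starts, the `nvmf_tcp_init` steps traced above (nvmf/common.sh@250-291) boil down to the short sequence below: move one NIC into a network namespace, give each side an address, open TCP port 4420, and ping both directions. The interface names (`cvl_0_0`/`cvl_0_1`) and 10.0.0.x addresses are the values from this particular run; the commands need root and the physical NICs, so this sketch only prints the sequence rather than executing it.

```shell
# Sketch of the nvmf_tcp_init sequence from this run's trace. Printed, not
# executed: the real commands require root and the cvl_0_* devices.
nvmf_tcp_init_cmds=$(cat <<'EOF'
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                   # root ns -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator
EOF
)
printf '%s\n' "$nvmf_tcp_init_cmds"
```

This is why `nvmfappstart` below wraps the target binary in `ip netns exec cvl_0_0_ns_spdk ...`: the target listens on 10.0.0.2 inside the namespace while fio connects from the root namespace via 10.0.0.1.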
00:33:49.699 [2024-11-20 12:43:32.800312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.958 12:43:32 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:49.959 12:43:32 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:49.959 12:43:32 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:49.959 12:43:32 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:49.959 12:43:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:49.959 12:43:32 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:49.959 12:43:32 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:49.959 12:43:32 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:49.959 12:43:32 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.959 12:43:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:49.959 [2024-11-20 12:43:32.947883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:49.959 12:43:32 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.959 12:43:32 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:49.959 12:43:32 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:49.959 12:43:32 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:49.959 12:43:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:49.959 ************************************ 00:33:49.959 START TEST fio_dif_1_default 00:33:49.959 ************************************ 00:33:49.959 12:43:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:49.959 12:43:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:49.959 12:43:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:49.959 12:43:32 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:49.959 12:43:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:49.959 12:43:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:49.959 12:43:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:49.959 12:43:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.959 12:43:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:49.959 bdev_null0 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:49.959 [2024-11-20 12:43:33.024237] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:49.959 { 00:33:49.959 "params": { 00:33:49.959 "name": "Nvme$subsystem", 00:33:49.959 "trtype": "$TEST_TRANSPORT", 00:33:49.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:49.959 "adrfam": "ipv4", 00:33:49.959 "trsvcid": "$NVMF_PORT", 00:33:49.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:49.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:49.959 "hdgst": ${hdgst:-false}, 00:33:49.959 "ddgst": ${ddgst:-false} 00:33:49.959 }, 00:33:49.959 "method": "bdev_nvme_attach_controller" 00:33:49.959 } 00:33:49.959 EOF 00:33:49.959 )") 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:49.959 "params": { 00:33:49.959 "name": "Nvme0", 00:33:49.959 "trtype": "tcp", 00:33:49.959 "traddr": "10.0.0.2", 00:33:49.959 "adrfam": "ipv4", 00:33:49.959 "trsvcid": "4420", 00:33:49.959 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:49.959 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:49.959 "hdgst": false, 00:33:49.959 "ddgst": false 00:33:49.959 }, 00:33:49.959 "method": "bdev_nvme_attach_controller" 00:33:49.959 }' 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:49.959 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:50.246 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:50.246 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:50.246 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:50.246 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:50.246 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:50.246 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:50.246 12:43:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:50.507 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:50.507 fio-3.35 
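The `printf '%s\n'` output above is produced by `gen_nvmf_target_json`: one `bdev_nvme_attach_controller` params block is emitted per subsystem id via a heredoc, and the blocks are comma-joined for fio's `--spdk_json_conf`. The sketch below is an illustrative stand-alone rendition of that helper, not SPDK's shell code itself (the real helper also normalizes the result through `jq`); the addresses and NQN patterns are the ones visible in this run.

```shell
# Illustrative re-creation of gen_nvmf_target_json from nvmf/common.sh:
# build one attach-controller params block per subsystem id, then join
# the blocks with commas (IFS=,) as seen in the trace above.
gen_nvmf_target_json() {
    local subsystem
    local config=()
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 0
```

Calling it with several ids (as the multi-subsystem test further down does with `0 1`) yields one comma-separated controller entry per subsystem.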
00:33:50.507 Starting 1 thread 00:34:02.701 00:34:02.701 filename0: (groupid=0, jobs=1): err= 0: pid=697467: Wed Nov 20 12:43:43 2024 00:34:02.701 read: IOPS=204, BW=818KiB/s (838kB/s)(8208KiB/10031msec) 00:34:02.701 slat (nsec): min=5830, max=33186, avg=6258.29, stdev=1622.61 00:34:02.701 clat (usec): min=374, max=42582, avg=19535.08, stdev=20364.67 00:34:02.701 lat (usec): min=380, max=42588, avg=19541.33, stdev=20364.54 00:34:02.701 clat percentiles (usec): 00:34:02.701 | 1.00th=[ 388], 5.00th=[ 396], 10.00th=[ 404], 20.00th=[ 412], 00:34:02.701 | 30.00th=[ 420], 40.00th=[ 445], 50.00th=[ 603], 60.00th=[40633], 00:34:02.701 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:34:02.701 | 99.00th=[41681], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:02.701 | 99.99th=[42730] 00:34:02.701 bw ( KiB/s): min= 768, max= 1024, per=100.00%, avg=819.25, stdev=64.35, samples=20 00:34:02.701 iops : min= 192, max= 256, avg=204.80, stdev=16.08, samples=20 00:34:02.701 lat (usec) : 500=44.25%, 750=8.77% 00:34:02.701 lat (msec) : 2=0.19%, 50=46.78% 00:34:02.701 cpu : usr=92.55%, sys=7.19%, ctx=8, majf=0, minf=0 00:34:02.701 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.701 issued rwts: total=2052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.701 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:02.701 00:34:02.701 Run status group 0 (all jobs): 00:34:02.701 READ: bw=818KiB/s (838kB/s), 818KiB/s-818KiB/s (838kB/s-838kB/s), io=8208KiB (8405kB), run=10031-10031msec 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.701 00:34:02.701 real 0m11.133s 00:34:02.701 user 0m15.984s 00:34:02.701 sys 0m1.014s 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:02.701 ************************************ 00:34:02.701 END TEST fio_dif_1_default 00:34:02.701 ************************************ 00:34:02.701 12:43:44 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:02.701 12:43:44 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:02.701 12:43:44 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:02.701 12:43:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:02.701 ************************************ 00:34:02.701 START TEST fio_dif_1_multi_subsystems 00:34:02.701 ************************************ 00:34:02.701 12:43:44 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.701 bdev_null0 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.701 12:43:44 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.701 [2024-11-20 12:43:44.227252] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.701 bdev_null1 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.701 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:02.702 { 00:34:02.702 "params": { 00:34:02.702 "name": "Nvme$subsystem", 00:34:02.702 "trtype": "$TEST_TRANSPORT", 00:34:02.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:02.702 "adrfam": "ipv4", 00:34:02.702 "trsvcid": "$NVMF_PORT", 00:34:02.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:02.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:02.702 "hdgst": ${hdgst:-false}, 00:34:02.702 "ddgst": ${ddgst:-false} 00:34:02.702 }, 00:34:02.702 "method": "bdev_nvme_attach_controller" 00:34:02.702 } 00:34:02.702 EOF 00:34:02.702 )") 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:02.702 { 00:34:02.702 "params": { 00:34:02.702 "name": "Nvme$subsystem", 00:34:02.702 "trtype": "$TEST_TRANSPORT", 00:34:02.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:02.702 "adrfam": "ipv4", 00:34:02.702 "trsvcid": "$NVMF_PORT", 00:34:02.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:02.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:02.702 "hdgst": ${hdgst:-false}, 00:34:02.702 "ddgst": ${ddgst:-false} 00:34:02.702 }, 00:34:02.702 "method": "bdev_nvme_attach_controller" 00:34:02.702 } 00:34:02.702 EOF 00:34:02.702 )") 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:02.702 "params": { 00:34:02.702 "name": "Nvme0", 00:34:02.702 "trtype": "tcp", 00:34:02.702 "traddr": "10.0.0.2", 00:34:02.702 "adrfam": "ipv4", 00:34:02.702 "trsvcid": "4420", 00:34:02.702 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:02.702 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:02.702 "hdgst": false, 00:34:02.702 "ddgst": false 00:34:02.702 }, 00:34:02.702 "method": "bdev_nvme_attach_controller" 00:34:02.702 },{ 00:34:02.702 "params": { 00:34:02.702 "name": "Nvme1", 00:34:02.702 "trtype": "tcp", 00:34:02.702 "traddr": "10.0.0.2", 00:34:02.702 "adrfam": "ipv4", 00:34:02.702 "trsvcid": "4420", 00:34:02.702 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:02.702 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:02.702 "hdgst": false, 00:34:02.702 "ddgst": false 00:34:02.702 }, 00:34:02.702 "method": "bdev_nvme_attach_controller" 00:34:02.702 }' 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:02.702 12:43:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.702 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:02.702 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:02.702 fio-3.35 00:34:02.702 Starting 2 threads 00:34:12.664 00:34:12.664 filename0: (groupid=0, jobs=1): err= 0: pid=699432: Wed Nov 20 12:43:55 2024 00:34:12.664 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10010msec) 00:34:12.664 slat (nsec): min=5959, max=23014, avg=7741.50, stdev=2417.33 00:34:12.664 clat (usec): min=40768, max=42430, avg=40999.92, stdev=163.96 00:34:12.664 lat (usec): min=40774, max=42453, avg=41007.67, stdev=164.39 00:34:12.664 clat percentiles (usec): 00:34:12.664 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:12.664 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:12.664 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:12.664 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:12.664 | 99.99th=[42206] 00:34:12.664 bw ( KiB/s): min= 384, max= 416, per=49.74%, avg=388.80, stdev=11.72, samples=20 00:34:12.664 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:12.664 lat (msec) : 50=100.00% 00:34:12.665 cpu : usr=96.82%, sys=2.93%, ctx=9, majf=0, minf=57 00:34:12.665 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:12.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.665 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:12.665 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:12.665 filename1: (groupid=0, jobs=1): err= 0: pid=699433: Wed Nov 20 12:43:55 2024 00:34:12.665 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10010msec) 00:34:12.665 slat (nsec): min=5971, max=32150, avg=7764.51, stdev=2695.51 00:34:12.665 clat (usec): min=40775, max=42005, avg=40999.53, stdev=151.55 00:34:12.665 lat (usec): min=40786, max=42016, avg=41007.29, stdev=151.60 00:34:12.665 clat percentiles (usec): 00:34:12.665 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:12.665 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:12.665 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:12.665 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:12.665 | 99.99th=[42206] 00:34:12.665 bw ( KiB/s): min= 384, max= 416, per=49.74%, avg=388.80, stdev=11.72, samples=20 00:34:12.665 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:12.665 lat (msec) : 50=100.00% 00:34:12.665 cpu : usr=96.90%, sys=2.85%, ctx=7, majf=0, minf=185 00:34:12.665 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:12.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.665 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:12.665 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:12.665 00:34:12.665 Run status group 0 (all jobs): 00:34:12.665 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10010-10010msec 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- 
# local sub 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.665 12:43:55 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.665 00:34:12.665 real 0m11.236s 00:34:12.665 user 0m26.076s 00:34:12.665 sys 0m0.892s 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:12.665 12:43:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.665 ************************************ 00:34:12.665 END TEST fio_dif_1_multi_subsystems 00:34:12.665 ************************************ 00:34:12.665 12:43:55 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:12.665 12:43:55 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:12.665 12:43:55 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:12.665 12:43:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:12.665 ************************************ 00:34:12.665 START TEST fio_dif_rand_params 00:34:12.665 ************************************ 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:12.665 12:43:55 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:12.665 bdev_null0 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:12.665 [2024-11-20 12:43:55.540790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:12.665 12:43:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:12.665 { 00:34:12.665 "params": { 00:34:12.665 "name": "Nvme$subsystem", 00:34:12.665 "trtype": "$TEST_TRANSPORT", 00:34:12.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:12.666 "adrfam": "ipv4", 00:34:12.666 "trsvcid": "$NVMF_PORT", 
00:34:12.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:12.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:12.666 "hdgst": ${hdgst:-false}, 00:34:12.666 "ddgst": ${ddgst:-false} 00:34:12.666 }, 00:34:12.666 "method": "bdev_nvme_attach_controller" 00:34:12.666 } 00:34:12.666 EOF 00:34:12.666 )") 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:12.666 
12:43:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:12.666 "params": { 00:34:12.666 "name": "Nvme0", 00:34:12.666 "trtype": "tcp", 00:34:12.666 "traddr": "10.0.0.2", 00:34:12.666 "adrfam": "ipv4", 00:34:12.666 "trsvcid": "4420", 00:34:12.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:12.666 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:12.666 "hdgst": false, 00:34:12.666 "ddgst": false 00:34:12.666 }, 00:34:12.666 "method": "bdev_nvme_attach_controller" 00:34:12.666 }' 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:12.666 12:43:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:12.924 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, 
(W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:12.924 ... 00:34:12.924 fio-3.35 00:34:12.924 Starting 3 threads 00:34:19.490 00:34:19.490 filename0: (groupid=0, jobs=1): err= 0: pid=701393: Wed Nov 20 12:44:01 2024 00:34:19.490 read: IOPS=321, BW=40.2MiB/s (42.2MB/s)(201MiB/5006msec) 00:34:19.490 slat (nsec): min=6212, max=25222, avg=10664.40, stdev=1869.00 00:34:19.490 clat (usec): min=3688, max=50715, avg=9314.26, stdev=5408.86 00:34:19.490 lat (usec): min=3694, max=50727, avg=9324.92, stdev=5408.78 00:34:19.490 clat percentiles (usec): 00:34:19.490 | 1.00th=[ 5211], 5.00th=[ 6128], 10.00th=[ 6980], 20.00th=[ 7832], 00:34:19.490 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:34:19.490 | 70.00th=[ 9241], 80.00th=[ 9634], 90.00th=[10159], 95.00th=[10814], 00:34:19.490 | 99.00th=[46924], 99.50th=[49021], 99.90th=[49546], 99.95th=[50594], 00:34:19.490 | 99.99th=[50594] 00:34:19.490 bw ( KiB/s): min=24832, max=46080, per=34.55%, avg=41164.80, stdev=5890.84, samples=10 00:34:19.490 iops : min= 194, max= 360, avg=321.60, stdev=46.02, samples=10 00:34:19.490 lat (msec) : 4=0.37%, 10=86.83%, 20=10.93%, 50=1.80%, 100=0.06% 00:34:19.490 cpu : usr=94.55%, sys=5.17%, ctx=8, majf=0, minf=48 00:34:19.490 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:19.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.490 issued rwts: total=1610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:19.490 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:19.490 filename0: (groupid=0, jobs=1): err= 0: pid=701394: Wed Nov 20 12:44:01 2024 00:34:19.490 read: IOPS=309, BW=38.6MiB/s (40.5MB/s)(193MiB/5004msec) 00:34:19.490 slat (nsec): min=6235, max=30107, avg=10646.86, stdev=2009.42 00:34:19.490 clat (usec): min=3470, max=50022, avg=9690.79, stdev=4248.22 00:34:19.490 lat (usec): min=3476, 
max=50032, avg=9701.44, stdev=4248.62 00:34:19.490 clat percentiles (usec): 00:34:19.490 | 1.00th=[ 3654], 5.00th=[ 5538], 10.00th=[ 6521], 20.00th=[ 7963], 00:34:19.490 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:34:19.490 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11600], 95.00th=[11994], 00:34:19.490 | 99.00th=[13960], 99.50th=[46924], 99.90th=[50070], 99.95th=[50070], 00:34:19.490 | 99.99th=[50070] 00:34:19.490 bw ( KiB/s): min=36864, max=43520, per=33.36%, avg=39736.89, stdev=2059.08, samples=9 00:34:19.490 iops : min= 288, max= 340, avg=310.44, stdev=16.09, samples=9 00:34:19.490 lat (msec) : 4=2.33%, 10=56.17%, 20=40.53%, 50=0.84%, 100=0.13% 00:34:19.490 cpu : usr=94.32%, sys=5.38%, ctx=8, majf=0, minf=56 00:34:19.490 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:19.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.490 issued rwts: total=1547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:19.490 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:19.490 filename0: (groupid=0, jobs=1): err= 0: pid=701395: Wed Nov 20 12:44:01 2024 00:34:19.490 read: IOPS=300, BW=37.5MiB/s (39.3MB/s)(188MiB/5004msec) 00:34:19.490 slat (nsec): min=6269, max=26866, avg=10619.26, stdev=1976.10 00:34:19.490 clat (usec): min=3888, max=52116, avg=9980.82, stdev=5896.77 00:34:19.490 lat (usec): min=3895, max=52126, avg=9991.44, stdev=5896.81 00:34:19.490 clat percentiles (usec): 00:34:19.490 | 1.00th=[ 5276], 5.00th=[ 6325], 10.00th=[ 7308], 20.00th=[ 8094], 00:34:19.490 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9503], 00:34:19.490 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11207], 95.00th=[11863], 00:34:19.490 | 99.00th=[48497], 99.50th=[49546], 99.90th=[51119], 99.95th=[52167], 00:34:19.490 | 99.99th=[52167] 00:34:19.490 bw ( KiB/s): min=34560, max=40960, per=31.88%, 
avg=37973.33, stdev=2130.34, samples=9 00:34:19.490 iops : min= 270, max= 320, avg=296.67, stdev=16.64, samples=9 00:34:19.490 lat (msec) : 4=0.27%, 10=71.50%, 20=26.03%, 50=1.80%, 100=0.40% 00:34:19.490 cpu : usr=93.92%, sys=5.78%, ctx=16, majf=0, minf=41 00:34:19.490 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:19.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.490 issued rwts: total=1502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:19.490 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:19.490 00:34:19.490 Run status group 0 (all jobs): 00:34:19.490 READ: bw=116MiB/s (122MB/s), 37.5MiB/s-40.2MiB/s (39.3MB/s-42.2MB/s), io=582MiB (611MB), run=5004-5006msec 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@10 -- # set +x 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.490 bdev_null0 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.490 12:44:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.490 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.491 [2024-11-20 12:44:01.717938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.491 bdev_null1 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.491 bdev_null2 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:19.491 12:44:01 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:19.491 { 00:34:19.491 "params": { 00:34:19.491 "name": "Nvme$subsystem", 00:34:19.491 "trtype": "$TEST_TRANSPORT", 00:34:19.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:19.491 "adrfam": "ipv4", 00:34:19.491 "trsvcid": "$NVMF_PORT", 00:34:19.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:19.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:19.491 "hdgst": ${hdgst:-false}, 00:34:19.491 "ddgst": ${ddgst:-false} 00:34:19.491 }, 00:34:19.491 "method": "bdev_nvme_attach_controller" 00:34:19.491 } 00:34:19.491 EOF 00:34:19.491 )") 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:19.491 12:44:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:19.491 { 00:34:19.491 "params": { 00:34:19.491 "name": "Nvme$subsystem", 00:34:19.491 "trtype": "$TEST_TRANSPORT", 00:34:19.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:19.491 "adrfam": "ipv4", 00:34:19.491 "trsvcid": "$NVMF_PORT", 00:34:19.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:19.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:19.491 "hdgst": ${hdgst:-false}, 00:34:19.491 "ddgst": ${ddgst:-false} 00:34:19.491 }, 00:34:19.491 "method": "bdev_nvme_attach_controller" 00:34:19.491 } 00:34:19.491 EOF 00:34:19.491 )") 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:19.491 12:44:01 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:19.491 { 00:34:19.491 "params": { 00:34:19.491 "name": "Nvme$subsystem", 00:34:19.491 "trtype": "$TEST_TRANSPORT", 00:34:19.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:19.491 "adrfam": "ipv4", 00:34:19.491 "trsvcid": "$NVMF_PORT", 00:34:19.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:19.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:19.491 "hdgst": ${hdgst:-false}, 00:34:19.491 "ddgst": ${ddgst:-false} 00:34:19.491 }, 00:34:19.491 "method": "bdev_nvme_attach_controller" 00:34:19.491 } 00:34:19.491 EOF 00:34:19.491 )") 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:19.491 12:44:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:19.491 "params": { 00:34:19.491 "name": "Nvme0", 00:34:19.491 "trtype": "tcp", 00:34:19.491 "traddr": "10.0.0.2", 00:34:19.491 "adrfam": "ipv4", 00:34:19.491 "trsvcid": "4420", 00:34:19.491 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:19.491 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:19.491 "hdgst": false, 00:34:19.491 "ddgst": false 00:34:19.491 }, 00:34:19.491 "method": "bdev_nvme_attach_controller" 00:34:19.491 },{ 00:34:19.491 "params": { 00:34:19.491 "name": "Nvme1", 00:34:19.491 "trtype": "tcp", 00:34:19.491 "traddr": "10.0.0.2", 00:34:19.491 "adrfam": "ipv4", 00:34:19.491 "trsvcid": "4420", 00:34:19.491 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:19.491 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:19.491 "hdgst": false, 00:34:19.491 "ddgst": false 00:34:19.491 }, 00:34:19.492 "method": "bdev_nvme_attach_controller" 00:34:19.492 },{ 00:34:19.492 "params": { 00:34:19.492 "name": "Nvme2", 00:34:19.492 "trtype": "tcp", 00:34:19.492 "traddr": "10.0.0.2", 00:34:19.492 "adrfam": "ipv4", 00:34:19.492 "trsvcid": "4420", 00:34:19.492 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:19.492 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:19.492 "hdgst": false, 00:34:19.492 "ddgst": false 00:34:19.492 }, 00:34:19.492 "method": "bdev_nvme_attach_controller" 00:34:19.492 }' 00:34:19.492 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:19.492 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:19.492 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:19.492 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:19.492 12:44:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:19.492 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:19.492 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:19.492 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:19.492 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:19.492 12:44:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:19.492 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:19.492 ... 00:34:19.492 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:19.492 ... 00:34:19.492 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:19.492 ... 
00:34:19.492 fio-3.35 00:34:19.492 Starting 24 threads 00:34:31.686 00:34:31.686 filename0: (groupid=0, jobs=1): err= 0: pid=702442: Wed Nov 20 12:44:13 2024 00:34:31.686 read: IOPS=576, BW=2306KiB/s (2361kB/s)(22.6MiB/10021msec) 00:34:31.686 slat (nsec): min=7049, max=52695, avg=15243.20, stdev=7318.94 00:34:31.686 clat (usec): min=3917, max=29440, avg=27634.91, stdev=2583.85 00:34:31.686 lat (usec): min=3927, max=29454, avg=27650.16, stdev=2583.32 00:34:31.686 clat percentiles (usec): 00:34:31.686 | 1.00th=[10683], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:31.686 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.686 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:34:31.686 | 99.00th=[28705], 99.50th=[28967], 99.90th=[29492], 99.95th=[29492], 00:34:31.686 | 99.99th=[29492] 00:34:31.686 bw ( KiB/s): min= 2176, max= 2816, per=4.21%, avg=2304.00, stdev=131.33, samples=20 00:34:31.686 iops : min= 544, max= 704, avg=576.00, stdev=32.83, samples=20 00:34:31.686 lat (msec) : 4=0.03%, 10=0.80%, 20=1.11%, 50=98.06% 00:34:31.686 cpu : usr=98.59%, sys=1.04%, ctx=10, majf=0, minf=9 00:34:31.686 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:31.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.686 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.686 issued rwts: total=5776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.686 filename0: (groupid=0, jobs=1): err= 0: pid=702443: Wed Nov 20 12:44:13 2024 00:34:31.686 read: IOPS=568, BW=2274KiB/s (2329kB/s)(22.2MiB/10002msec) 00:34:31.686 slat (nsec): min=7004, max=77350, avg=26308.67, stdev=12796.99 00:34:31.686 clat (usec): min=14232, max=70262, avg=27912.95, stdev=1903.85 00:34:31.686 lat (usec): min=14245, max=70310, avg=27939.26, stdev=1904.31 00:34:31.686 clat percentiles (usec): 00:34:31.686 
| 1.00th=[25035], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:34:31.686 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.686 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:34:31.686 | 99.00th=[29230], 99.50th=[31589], 99.90th=[55313], 99.95th=[55313], 00:34:31.686 | 99.99th=[70779] 00:34:31.686 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2266.32, stdev=70.72, samples=19 00:34:31.686 iops : min= 513, max= 576, avg=566.58, stdev=17.68, samples=19 00:34:31.686 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:34:31.686 cpu : usr=98.51%, sys=1.13%, ctx=7, majf=0, minf=9 00:34:31.686 IO depths : 1=5.5%, 2=11.6%, 4=24.8%, 8=51.0%, 16=7.0%, 32=0.0%, >=64=0.0% 00:34:31.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.686 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.686 issued rwts: total=5686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.686 filename0: (groupid=0, jobs=1): err= 0: pid=702444: Wed Nov 20 12:44:13 2024 00:34:31.686 read: IOPS=566, BW=2268KiB/s (2322kB/s)(22.2MiB/10046msec) 00:34:31.686 slat (usec): min=7, max=100, avg=48.58, stdev=20.36 00:34:31.686 clat (usec): min=12008, max=46528, avg=27644.08, stdev=1397.42 00:34:31.686 lat (usec): min=12021, max=46553, avg=27692.66, stdev=1398.71 00:34:31.686 clat percentiles (usec): 00:34:31.686 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:34:31.686 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:34:31.686 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:34:31.686 | 99.00th=[28443], 99.50th=[28705], 99.90th=[46400], 99.95th=[46400], 00:34:31.686 | 99.99th=[46400] 00:34:31.686 bw ( KiB/s): min= 2176, max= 2416, per=4.16%, avg=2277.60, stdev=65.10, samples=20 00:34:31.686 iops : min= 544, max= 604, avg=569.40, stdev=16.28, samples=20 
00:34:31.686 lat (msec) : 20=0.49%, 50=99.51% 00:34:31.686 cpu : usr=98.57%, sys=1.06%, ctx=14, majf=0, minf=9 00:34:31.686 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:31.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.686 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.686 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.686 filename0: (groupid=0, jobs=1): err= 0: pid=702445: Wed Nov 20 12:44:13 2024 00:34:31.686 read: IOPS=571, BW=2286KiB/s (2341kB/s)(22.4MiB/10022msec) 00:34:31.686 slat (nsec): min=7411, max=43240, avg=20844.17, stdev=6528.68 00:34:31.686 clat (usec): min=10474, max=29401, avg=27823.85, stdev=1220.61 00:34:31.686 lat (usec): min=10489, max=29416, avg=27844.69, stdev=1220.88 00:34:31.686 clat percentiles (usec): 00:34:31.686 | 1.00th=[22152], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:31.686 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.686 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:31.686 | 99.00th=[28705], 99.50th=[28705], 99.90th=[29230], 99.95th=[29492], 00:34:31.686 | 99.99th=[29492] 00:34:31.686 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2284.80, stdev=62.64, samples=20 00:34:31.686 iops : min= 544, max= 608, avg=571.20, stdev=15.66, samples=20 00:34:31.686 lat (msec) : 20=0.52%, 50=99.48% 00:34:31.686 cpu : usr=98.65%, sys=1.00%, ctx=14, majf=0, minf=9 00:34:31.686 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:31.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.686 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.686 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.686 latency : target=0, window=0, percentile=100.00%, depth=16 
00:34:31.686 filename0: (groupid=0, jobs=1): err= 0: pid=702446: Wed Nov 20 12:44:13 2024 00:34:31.686 read: IOPS=574, BW=2300KiB/s (2355kB/s)(22.5MiB/10003msec) 00:34:31.686 slat (nsec): min=6992, max=41904, avg=11493.40, stdev=4534.15 00:34:31.686 clat (usec): min=4779, max=39527, avg=27729.01, stdev=2298.97 00:34:31.686 lat (usec): min=4792, max=39535, avg=27740.50, stdev=2298.64 00:34:31.686 clat percentiles (usec): 00:34:31.686 | 1.00th=[11731], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:31.686 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.686 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:31.686 | 99.00th=[28705], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:34:31.686 | 99.99th=[39584] 00:34:31.686 bw ( KiB/s): min= 2176, max= 2744, per=4.21%, avg=2300.21, stdev=119.92, samples=19 00:34:31.686 iops : min= 544, max= 686, avg=575.05, stdev=29.98, samples=19 00:34:31.686 lat (msec) : 10=0.80%, 20=0.75%, 50=98.45% 00:34:31.686 cpu : usr=98.61%, sys=1.03%, ctx=10, majf=0, minf=9 00:34:31.686 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:31.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.686 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.686 issued rwts: total=5751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.686 filename0: (groupid=0, jobs=1): err= 0: pid=702447: Wed Nov 20 12:44:13 2024 00:34:31.686 read: IOPS=569, BW=2278KiB/s (2332kB/s)(22.2MiB/10003msec) 00:34:31.686 slat (nsec): min=7781, max=80581, avg=33830.29, stdev=15489.52 00:34:31.686 clat (usec): min=19641, max=29247, avg=27793.95, stdev=518.19 00:34:31.687 lat (usec): min=19652, max=29269, avg=27827.78, stdev=517.43 00:34:31.687 clat percentiles (usec): 00:34:31.687 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 
00:34:31.687 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:34:31.687 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:34:31.687 | 99.00th=[28705], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:34:31.687 | 99.99th=[29230] 00:34:31.687 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.32, stdev=57.91, samples=19 00:34:31.687 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:34:31.687 lat (msec) : 20=0.23%, 50=99.77% 00:34:31.687 cpu : usr=97.85%, sys=1.38%, ctx=156, majf=0, minf=9 00:34:31.687 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.687 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.687 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.687 filename0: (groupid=0, jobs=1): err= 0: pid=702448: Wed Nov 20 12:44:13 2024 00:34:31.687 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10006msec) 00:34:31.687 slat (nsec): min=4029, max=49333, avg=23040.62, stdev=6942.87 00:34:31.687 clat (usec): min=14329, max=42463, avg=27901.57, stdev=1119.94 00:34:31.687 lat (usec): min=14341, max=42477, avg=27924.62, stdev=1119.49 00:34:31.687 clat percentiles (usec): 00:34:31.687 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:31.687 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.687 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:31.687 | 99.00th=[28705], 99.50th=[29230], 99.90th=[42206], 99.95th=[42206], 00:34:31.687 | 99.99th=[42206] 00:34:31.687 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.32, stdev=57.91, samples=19 00:34:31.687 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:34:31.687 lat (msec) : 20=0.28%, 50=99.72% 00:34:31.687 cpu : 
usr=98.61%, sys=1.03%, ctx=7, majf=0, minf=9 00:34:31.687 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.687 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.687 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.687 filename0: (groupid=0, jobs=1): err= 0: pid=702449: Wed Nov 20 12:44:13 2024 00:34:31.687 read: IOPS=576, BW=2307KiB/s (2362kB/s)(22.6MiB/10016msec) 00:34:31.687 slat (nsec): min=7069, max=49295, avg=19232.11, stdev=7031.75 00:34:31.687 clat (usec): min=4531, max=32082, avg=27587.76, stdev=2684.08 00:34:31.687 lat (usec): min=4542, max=32097, avg=27606.99, stdev=2684.10 00:34:31.687 clat percentiles (usec): 00:34:31.687 | 1.00th=[ 9503], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:31.687 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.687 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:34:31.687 | 99.00th=[28705], 99.50th=[28705], 99.90th=[29230], 99.95th=[29492], 00:34:31.687 | 99.99th=[32113] 00:34:31.687 bw ( KiB/s): min= 2176, max= 2816, per=4.21%, avg=2304.00, stdev=131.33, samples=20 00:34:31.687 iops : min= 544, max= 704, avg=576.00, stdev=32.83, samples=20 00:34:31.687 lat (msec) : 10=1.11%, 20=0.83%, 50=98.06% 00:34:31.687 cpu : usr=98.61%, sys=1.03%, ctx=15, majf=0, minf=9 00:34:31.687 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:31.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.687 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.687 issued rwts: total=5776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.687 filename1: (groupid=0, jobs=1): err= 0: pid=702450: 
Wed Nov 20 12:44:13 2024 00:34:31.687 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10006msec) 00:34:31.687 slat (nsec): min=4606, max=52670, avg=22876.77, stdev=7169.76 00:34:31.687 clat (usec): min=14155, max=42389, avg=27893.93, stdev=1121.25 00:34:31.687 lat (usec): min=14180, max=42405, avg=27916.80, stdev=1120.96 00:34:31.687 clat percentiles (usec): 00:34:31.687 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:31.687 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.687 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:31.687 | 99.00th=[28705], 99.50th=[29230], 99.90th=[42206], 99.95th=[42206], 00:34:31.687 | 99.99th=[42206] 00:34:31.687 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.32, stdev=57.91, samples=19 00:34:31.687 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:34:31.687 lat (msec) : 20=0.28%, 50=99.72% 00:34:31.687 cpu : usr=98.47%, sys=1.17%, ctx=9, majf=0, minf=9 00:34:31.687 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.687 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.687 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.687 filename1: (groupid=0, jobs=1): err= 0: pid=702451: Wed Nov 20 12:44:13 2024 00:34:31.687 read: IOPS=569, BW=2278KiB/s (2333kB/s)(22.3MiB/10004msec) 00:34:31.687 slat (nsec): min=5454, max=47845, avg=21730.42, stdev=6958.95 00:34:31.687 clat (usec): min=14232, max=64056, avg=27892.98, stdev=2019.18 00:34:31.687 lat (usec): min=14246, max=64071, avg=27914.71, stdev=2019.23 00:34:31.687 clat percentiles (usec): 00:34:31.687 | 1.00th=[20055], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:31.687 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 
00:34:31.687 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:31.687 | 99.00th=[35390], 99.50th=[42206], 99.90th=[48497], 99.95th=[48497], 00:34:31.687 | 99.99th=[64226] 00:34:31.687 bw ( KiB/s): min= 2148, max= 2352, per=4.15%, avg=2271.37, stdev=63.22, samples=19 00:34:31.687 iops : min= 537, max= 588, avg=567.84, stdev=15.81, samples=19 00:34:31.687 lat (msec) : 20=0.77%, 50=99.19%, 100=0.04% 00:34:31.687 cpu : usr=98.65%, sys=1.00%, ctx=12, majf=0, minf=9 00:34:31.687 IO depths : 1=5.8%, 2=11.6%, 4=23.8%, 8=51.9%, 16=6.9%, 32=0.0%, >=64=0.0% 00:34:31.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.687 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.687 issued rwts: total=5698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.687 filename1: (groupid=0, jobs=1): err= 0: pid=702452: Wed Nov 20 12:44:13 2024 00:34:31.687 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10006msec) 00:34:31.687 slat (nsec): min=7211, max=97967, avg=38285.79, stdev=23196.93 00:34:31.687 clat (usec): min=11978, max=48515, avg=27716.60, stdev=1475.77 00:34:31.687 lat (usec): min=11992, max=48528, avg=27754.89, stdev=1476.02 00:34:31.687 clat percentiles (usec): 00:34:31.687 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:34:31.687 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:34:31.687 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:34:31.687 | 99.00th=[28443], 99.50th=[28705], 99.90th=[48497], 99.95th=[48497], 00:34:31.687 | 99.99th=[48497] 00:34:31.687 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.53, stdev=57.55, samples=19 00:34:31.687 iops : min= 544, max= 576, avg=567.63, stdev=14.39, samples=19 00:34:31.687 lat (msec) : 20=0.54%, 50=99.46% 00:34:31.687 cpu : usr=98.59%, sys=1.06%, ctx=12, majf=0, minf=9 00:34:31.687 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.687 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.687 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.687 filename1: (groupid=0, jobs=1): err= 0: pid=702453: Wed Nov 20 12:44:13 2024 00:34:31.687 read: IOPS=589, BW=2357KiB/s (2413kB/s)(23.0MiB/10007msec) 00:34:31.687 slat (nsec): min=5746, max=89325, avg=15889.82, stdev=13352.54 00:34:31.687 clat (usec): min=11876, max=55191, avg=27089.62, stdev=3981.85 00:34:31.687 lat (usec): min=11891, max=55210, avg=27105.51, stdev=3980.16 00:34:31.687 clat percentiles (usec): 00:34:31.687 | 1.00th=[14615], 5.00th=[20317], 10.00th=[22152], 20.00th=[23725], 00:34:31.687 | 30.00th=[25297], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.687 | 70.00th=[27919], 80.00th=[28181], 90.00th=[32900], 95.00th=[33817], 00:34:31.687 | 99.00th=[36439], 99.50th=[36963], 99.90th=[42206], 99.95th=[42206], 00:34:31.687 | 99.99th=[55313] 00:34:31.687 bw ( KiB/s): min= 2160, max= 2544, per=4.30%, avg=2351.16, stdev=82.53, samples=19 00:34:31.687 iops : min= 540, max= 636, avg=587.79, stdev=20.63, samples=19 00:34:31.687 lat (msec) : 20=4.58%, 50=95.39%, 100=0.03% 00:34:31.687 cpu : usr=98.47%, sys=1.18%, ctx=11, majf=0, minf=13 00:34:31.687 IO depths : 1=0.1%, 2=0.1%, 4=2.5%, 8=81.2%, 16=16.2%, 32=0.0%, >=64=0.0% 00:34:31.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.687 complete : 0=0.0%, 4=88.9%, 8=9.1%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.687 issued rwts: total=5896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.687 filename1: (groupid=0, jobs=1): err= 0: pid=702454: Wed Nov 20 12:44:13 2024 00:34:31.687 read: IOPS=570, BW=2282KiB/s 
(2336kB/s)(22.3MiB/10014msec) 00:34:31.687 slat (nsec): min=7226, max=37377, avg=17798.69, stdev=5203.50 00:34:31.687 clat (usec): min=12181, max=33527, avg=27896.38, stdev=972.60 00:34:31.687 lat (usec): min=12194, max=33545, avg=27914.18, stdev=972.72 00:34:31.687 clat percentiles (usec): 00:34:31.687 | 1.00th=[27132], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:31.687 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.687 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:34:31.687 | 99.00th=[28705], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:34:31.687 | 99.99th=[33424] 00:34:31.687 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2278.40, stdev=52.53, samples=20 00:34:31.687 iops : min= 544, max= 576, avg=569.60, stdev=13.13, samples=20 00:34:31.687 lat (msec) : 20=0.32%, 50=99.68% 00:34:31.687 cpu : usr=98.64%, sys=1.00%, ctx=13, majf=0, minf=9 00:34:31.688 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:31.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.688 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.688 filename1: (groupid=0, jobs=1): err= 0: pid=702455: Wed Nov 20 12:44:13 2024 00:34:31.688 read: IOPS=569, BW=2278KiB/s (2332kB/s)(22.2MiB/10003msec) 00:34:31.688 slat (nsec): min=6186, max=97577, avg=40684.26, stdev=22179.30 00:34:31.688 clat (usec): min=18218, max=37685, avg=27710.05, stdev=603.95 00:34:31.688 lat (usec): min=18230, max=37708, avg=27750.73, stdev=604.18 00:34:31.688 clat percentiles (usec): 00:34:31.688 | 1.00th=[27132], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:34:31.688 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:34:31.688 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 
95.00th=[28443], 00:34:31.688 | 99.00th=[28705], 99.50th=[28705], 99.90th=[29230], 99.95th=[29230], 00:34:31.688 | 99.99th=[37487] 00:34:31.688 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.32, stdev=57.91, samples=19 00:34:31.688 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:34:31.688 lat (msec) : 20=0.30%, 50=99.70% 00:34:31.688 cpu : usr=98.72%, sys=0.92%, ctx=14, majf=0, minf=9 00:34:31.688 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:31.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.688 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.688 filename1: (groupid=0, jobs=1): err= 0: pid=702456: Wed Nov 20 12:44:13 2024 00:34:31.688 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10006msec) 00:34:31.688 slat (nsec): min=4046, max=37752, avg=17543.62, stdev=4962.37 00:34:31.688 clat (usec): min=17470, max=36204, avg=27954.50, stdev=771.82 00:34:31.688 lat (usec): min=17486, max=36217, avg=27972.05, stdev=771.57 00:34:31.688 clat percentiles (usec): 00:34:31.688 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:31.688 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.688 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:31.688 | 99.00th=[29230], 99.50th=[29492], 99.90th=[36439], 99.95th=[36439], 00:34:31.688 | 99.99th=[36439] 00:34:31.688 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.32, stdev=57.91, samples=19 00:34:31.688 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:34:31.688 lat (msec) : 20=0.28%, 50=99.72% 00:34:31.688 cpu : usr=98.46%, sys=1.19%, ctx=10, majf=0, minf=9 00:34:31.688 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.688 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.688 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.688 filename1: (groupid=0, jobs=1): err= 0: pid=702457: Wed Nov 20 12:44:13 2024 00:34:31.688 read: IOPS=570, BW=2282KiB/s (2336kB/s)(22.3MiB/10014msec) 00:34:31.688 slat (nsec): min=6757, max=37450, avg=18169.68, stdev=4964.02 00:34:31.688 clat (usec): min=12148, max=34136, avg=27885.89, stdev=971.74 00:34:31.688 lat (usec): min=12163, max=34156, avg=27904.06, stdev=971.95 00:34:31.688 clat percentiles (usec): 00:34:31.688 | 1.00th=[27132], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:31.688 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.688 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:34:31.688 | 99.00th=[28705], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:34:31.688 | 99.99th=[34341] 00:34:31.688 bw ( KiB/s): min= 2176, max= 2308, per=4.17%, avg=2278.60, stdev=52.64, samples=20 00:34:31.688 iops : min= 544, max= 577, avg=569.65, stdev=13.16, samples=20 00:34:31.688 lat (msec) : 20=0.32%, 50=99.68% 00:34:31.688 cpu : usr=98.56%, sys=1.08%, ctx=12, majf=0, minf=9 00:34:31.688 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:31.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.688 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.688 filename2: (groupid=0, jobs=1): err= 0: pid=702458: Wed Nov 20 12:44:13 2024 00:34:31.688 read: IOPS=569, BW=2278KiB/s (2333kB/s)(22.2MiB/10002msec) 00:34:31.688 slat (nsec): min=8299, max=98792, avg=38820.24, 
stdev=20400.85 00:34:31.688 clat (usec): min=19152, max=38021, avg=27733.86, stdev=583.33 00:34:31.688 lat (usec): min=19160, max=38043, avg=27772.68, stdev=583.59 00:34:31.688 clat percentiles (usec): 00:34:31.688 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:34:31.688 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:34:31.688 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:34:31.688 | 99.00th=[28705], 99.50th=[28705], 99.90th=[29230], 99.95th=[29230], 00:34:31.688 | 99.99th=[38011] 00:34:31.688 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.32, stdev=57.91, samples=19 00:34:31.688 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:34:31.688 lat (msec) : 20=0.18%, 50=99.82% 00:34:31.688 cpu : usr=98.37%, sys=1.19%, ctx=52, majf=0, minf=9 00:34:31.688 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:31.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.688 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.688 filename2: (groupid=0, jobs=1): err= 0: pid=702459: Wed Nov 20 12:44:13 2024 00:34:31.688 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10007msec) 00:34:31.688 slat (nsec): min=5659, max=99585, avg=40654.44, stdev=22279.72 00:34:31.688 clat (usec): min=10624, max=37702, avg=27574.18, stdev=1567.76 00:34:31.688 lat (usec): min=10646, max=37720, avg=27614.84, stdev=1570.29 00:34:31.688 clat percentiles (usec): 00:34:31.688 | 1.00th=[16909], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:34:31.688 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:34:31.688 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:34:31.688 | 99.00th=[28443], 99.50th=[28705], 99.90th=[28967], 
99.95th=[28967], 00:34:31.688 | 99.99th=[37487] 00:34:31.688 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2284.80, stdev=62.64, samples=20 00:34:31.688 iops : min= 544, max= 608, avg=571.20, stdev=15.66, samples=20 00:34:31.688 lat (msec) : 20=1.15%, 50=98.85% 00:34:31.688 cpu : usr=98.50%, sys=1.15%, ctx=15, majf=0, minf=9 00:34:31.688 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:31.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.688 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.688 filename2: (groupid=0, jobs=1): err= 0: pid=702460: Wed Nov 20 12:44:13 2024 00:34:31.688 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:34:31.688 slat (nsec): min=6003, max=52430, avg=23009.66, stdev=7477.25 00:34:31.688 clat (usec): min=21999, max=29550, avg=27906.53, stdev=427.43 00:34:31.688 lat (usec): min=22036, max=29567, avg=27929.54, stdev=426.64 00:34:31.688 clat percentiles (usec): 00:34:31.688 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:31.688 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.688 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:31.688 | 99.00th=[28705], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:34:31.688 | 99.99th=[29492] 00:34:31.688 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2277.05, stdev=53.61, samples=19 00:34:31.688 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:34:31.688 lat (msec) : 50=100.00% 00:34:31.688 cpu : usr=98.74%, sys=0.91%, ctx=14, majf=0, minf=9 00:34:31.688 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.688 complete : 0=0.0%, 
4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.688 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.688 filename2: (groupid=0, jobs=1): err= 0: pid=702461: Wed Nov 20 12:44:13 2024 00:34:31.688 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10007msec) 00:34:31.688 slat (nsec): min=6936, max=96690, avg=32270.81, stdev=23115.51 00:34:31.688 clat (usec): min=10419, max=29083, avg=27703.38, stdev=1584.54 00:34:31.688 lat (usec): min=10432, max=29098, avg=27735.65, stdev=1583.75 00:34:31.688 clat percentiles (usec): 00:34:31.688 | 1.00th=[16712], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:34:31.688 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.688 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:31.688 | 99.00th=[28705], 99.50th=[28705], 99.90th=[28967], 99.95th=[28967], 00:34:31.688 | 99.99th=[28967] 00:34:31.688 bw ( KiB/s): min= 2176, max= 2436, per=4.18%, avg=2285.00, stdev=63.14, samples=20 00:34:31.688 iops : min= 544, max= 609, avg=571.25, stdev=15.78, samples=20 00:34:31.688 lat (msec) : 20=1.12%, 50=98.88% 00:34:31.688 cpu : usr=98.64%, sys=0.99%, ctx=16, majf=0, minf=9 00:34:31.688 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.688 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.688 filename2: (groupid=0, jobs=1): err= 0: pid=702462: Wed Nov 20 12:44:13 2024 00:34:31.688 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10006msec) 00:34:31.688 slat (nsec): min=4019, max=49848, avg=23420.74, stdev=7167.85 00:34:31.688 clat (usec): min=14289, max=42662, avg=27893.09, stdev=1128.31 00:34:31.688 
lat (usec): min=14303, max=42675, avg=27916.51, stdev=1127.92 00:34:31.688 clat percentiles (usec): 00:34:31.688 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:31.689 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.689 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:31.689 | 99.00th=[28705], 99.50th=[29230], 99.90th=[42730], 99.95th=[42730], 00:34:31.689 | 99.99th=[42730] 00:34:31.689 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.32, stdev=57.91, samples=19 00:34:31.689 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:34:31.689 lat (msec) : 20=0.28%, 50=99.72% 00:34:31.689 cpu : usr=98.29%, sys=1.34%, ctx=15, majf=0, minf=9 00:34:31.689 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.689 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.689 filename2: (groupid=0, jobs=1): err= 0: pid=702463: Wed Nov 20 12:44:13 2024 00:34:31.689 read: IOPS=576, BW=2305KiB/s (2361kB/s)(22.6MiB/10022msec) 00:34:31.689 slat (nsec): min=6938, max=89086, avg=27071.14, stdev=16061.24 00:34:31.689 clat (usec): min=4430, max=29103, avg=27546.16, stdev=2592.14 00:34:31.689 lat (usec): min=4442, max=29128, avg=27573.24, stdev=2592.97 00:34:31.689 clat percentiles (usec): 00:34:31.689 | 1.00th=[ 8717], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:34:31.689 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.689 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:34:31.689 | 99.00th=[28705], 99.50th=[28705], 99.90th=[28967], 99.95th=[28967], 00:34:31.689 | 99.99th=[29230] 00:34:31.689 bw ( KiB/s): min= 2176, max= 2816, per=4.21%, 
avg=2304.00, stdev=131.33, samples=20 00:34:31.689 iops : min= 544, max= 704, avg=576.00, stdev=32.83, samples=20 00:34:31.689 lat (msec) : 10=1.00%, 20=0.93%, 50=98.06% 00:34:31.689 cpu : usr=98.34%, sys=1.22%, ctx=83, majf=0, minf=9 00:34:31.689 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:31.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.689 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.689 issued rwts: total=5776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.689 filename2: (groupid=0, jobs=1): err= 0: pid=702464: Wed Nov 20 12:44:13 2024 00:34:31.689 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:34:31.689 slat (nsec): min=6045, max=94789, avg=42926.04, stdev=22496.81 00:34:31.689 clat (usec): min=12029, max=46677, avg=27669.72, stdev=1401.43 00:34:31.689 lat (usec): min=12043, max=46704, avg=27712.65, stdev=1402.52 00:34:31.689 clat percentiles (usec): 00:34:31.689 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:34:31.689 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:34:31.689 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:34:31.689 | 99.00th=[28443], 99.50th=[28705], 99.90th=[46400], 99.95th=[46400], 00:34:31.689 | 99.99th=[46924] 00:34:31.689 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.53, stdev=57.55, samples=19 00:34:31.689 iops : min= 544, max= 576, avg=567.63, stdev=14.39, samples=19 00:34:31.689 lat (msec) : 20=0.54%, 50=99.46% 00:34:31.689 cpu : usr=98.57%, sys=1.04%, ctx=12, majf=0, minf=9 00:34:31.689 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.689 issued rwts: 
total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.689 filename2: (groupid=0, jobs=1): err= 0: pid=702465: Wed Nov 20 12:44:13 2024 00:34:31.689 read: IOPS=569, BW=2278KiB/s (2332kB/s)(22.2MiB/10003msec) 00:34:31.689 slat (nsec): min=6783, max=92218, avg=28925.10, stdev=18138.74 00:34:31.689 clat (usec): min=6082, max=55603, avg=27838.29, stdev=2051.58 00:34:31.689 lat (usec): min=6090, max=55646, avg=27867.22, stdev=2051.65 00:34:31.689 clat percentiles (usec): 00:34:31.689 | 1.00th=[25297], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:34:31.689 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.689 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:34:31.689 | 99.00th=[29230], 99.50th=[29492], 99.90th=[55313], 99.95th=[55313], 00:34:31.689 | 99.99th=[55837] 00:34:31.689 bw ( KiB/s): min= 2052, max= 2304, per=4.15%, avg=2270.53, stdev=71.25, samples=19 00:34:31.689 iops : min= 513, max= 576, avg=567.63, stdev=17.81, samples=19 00:34:31.689 lat (msec) : 10=0.28%, 20=0.42%, 50=99.02%, 100=0.28% 00:34:31.689 cpu : usr=98.45%, sys=1.18%, ctx=15, majf=0, minf=9 00:34:31.689 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:31.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.689 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.689 00:34:31.689 Run status group 0 (all jobs): 00:34:31.689 READ: bw=53.4MiB/s (56.0MB/s), 2268KiB/s-2357KiB/s (2322kB/s-2413kB/s), io=536MiB (562MB), run=10002-10046msec 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:31.689 
12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 
00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.689 bdev_null0 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:31.689 12:44:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.690 [2024-11-20 12:44:13.533524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.690 bdev_null1 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.690 12:44:13 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:31.690 { 00:34:31.690 "params": { 00:34:31.690 "name": "Nvme$subsystem", 00:34:31.690 "trtype": "$TEST_TRANSPORT", 00:34:31.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:31.690 "adrfam": "ipv4", 00:34:31.690 "trsvcid": "$NVMF_PORT", 00:34:31.690 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:34:31.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:31.690 "hdgst": ${hdgst:-false}, 00:34:31.690 "ddgst": ${ddgst:-false} 00:34:31.690 }, 00:34:31.690 "method": "bdev_nvme_attach_controller" 00:34:31.690 } 00:34:31.690 EOF 00:34:31.690 )") 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:31.690 { 00:34:31.690 "params": { 00:34:31.690 "name": "Nvme$subsystem", 00:34:31.690 "trtype": "$TEST_TRANSPORT", 00:34:31.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:31.690 "adrfam": "ipv4", 00:34:31.690 "trsvcid": "$NVMF_PORT", 00:34:31.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:31.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:31.690 "hdgst": ${hdgst:-false}, 00:34:31.690 "ddgst": ${ddgst:-false} 00:34:31.690 }, 00:34:31.690 "method": "bdev_nvme_attach_controller" 00:34:31.690 } 00:34:31.690 EOF 00:34:31.690 )") 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:31.690 "params": { 00:34:31.690 "name": "Nvme0", 00:34:31.690 "trtype": "tcp", 00:34:31.690 "traddr": "10.0.0.2", 00:34:31.690 "adrfam": "ipv4", 00:34:31.690 "trsvcid": "4420", 00:34:31.690 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:31.690 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:31.690 "hdgst": false, 00:34:31.690 "ddgst": false 00:34:31.690 }, 00:34:31.690 "method": "bdev_nvme_attach_controller" 00:34:31.690 },{ 00:34:31.690 "params": { 00:34:31.690 "name": "Nvme1", 00:34:31.690 "trtype": "tcp", 00:34:31.690 "traddr": "10.0.0.2", 00:34:31.690 "adrfam": "ipv4", 00:34:31.690 "trsvcid": "4420", 00:34:31.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:31.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:31.690 "hdgst": false, 00:34:31.690 "ddgst": false 00:34:31.690 }, 00:34:31.690 "method": "bdev_nvme_attach_controller" 00:34:31.690 }' 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:31.690 12:44:13 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:31.690 12:44:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.690 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:31.690 ... 00:34:31.690 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:31.690 ... 00:34:31.690 fio-3.35 00:34:31.690 Starting 4 threads 00:34:36.955 00:34:36.955 filename0: (groupid=0, jobs=1): err= 0: pid=704418: Wed Nov 20 12:44:19 2024 00:34:36.955 read: IOPS=2801, BW=21.9MiB/s (22.9MB/s)(109MiB/5002msec) 00:34:36.955 slat (nsec): min=6135, max=31034, avg=8912.63, stdev=3060.54 00:34:36.955 clat (usec): min=872, max=5492, avg=2827.40, stdev=402.11 00:34:36.955 lat (usec): min=886, max=5504, avg=2836.31, stdev=401.99 00:34:36.955 clat percentiles (usec): 00:34:36.955 | 1.00th=[ 1729], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2507], 00:34:36.955 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 2900], 60.00th=[ 2999], 00:34:36.956 | 70.00th=[ 3032], 80.00th=[ 3064], 90.00th=[ 3163], 95.00th=[ 3359], 00:34:36.956 | 99.00th=[ 4080], 99.50th=[ 4293], 99.90th=[ 4817], 99.95th=[ 4948], 00:34:36.956 | 99.99th=[ 5473] 00:34:36.956 bw ( KiB/s): min=21040, max=23248, per=26.86%, avg=22416.00, stdev=877.46, samples=10 00:34:36.956 iops : min= 2630, max= 2906, avg=2802.00, stdev=109.68, samples=10 00:34:36.956 lat (usec) : 1000=0.06% 00:34:36.956 lat (msec) : 2=2.02%, 4=96.71%, 10=1.21% 00:34:36.956 cpu : usr=95.62%, sys=4.08%, ctx=9, majf=0, minf=9 00:34:36.956 IO depths : 1=0.4%, 2=8.2%, 4=62.8%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.956 complete : 
0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.956 issued rwts: total=14013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.956 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:36.956 filename0: (groupid=0, jobs=1): err= 0: pid=704419: Wed Nov 20 12:44:19 2024 00:34:36.956 read: IOPS=2540, BW=19.9MiB/s (20.8MB/s)(99.3MiB/5002msec) 00:34:36.956 slat (nsec): min=6128, max=27599, avg=8750.08, stdev=3003.71 00:34:36.956 clat (usec): min=688, max=5610, avg=3122.50, stdev=448.59 00:34:36.956 lat (usec): min=699, max=5616, avg=3131.25, stdev=448.35 00:34:36.956 clat percentiles (usec): 00:34:36.956 | 1.00th=[ 2212], 5.00th=[ 2540], 10.00th=[ 2704], 20.00th=[ 2900], 00:34:36.956 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3032], 60.00th=[ 3064], 00:34:36.956 | 70.00th=[ 3130], 80.00th=[ 3326], 90.00th=[ 3687], 95.00th=[ 3982], 00:34:36.956 | 99.00th=[ 4817], 99.50th=[ 5145], 99.90th=[ 5538], 99.95th=[ 5604], 00:34:36.956 | 99.99th=[ 5604] 00:34:36.956 bw ( KiB/s): min=19552, max=21088, per=24.36%, avg=20332.10, stdev=530.76, samples=10 00:34:36.956 iops : min= 2444, max= 2636, avg=2541.50, stdev=66.34, samples=10 00:34:36.956 lat (usec) : 750=0.03%, 1000=0.01% 00:34:36.956 lat (msec) : 2=0.32%, 4=94.92%, 10=4.72% 00:34:36.956 cpu : usr=96.18%, sys=3.50%, ctx=8, majf=0, minf=9 00:34:36.956 IO depths : 1=0.1%, 2=2.9%, 4=68.6%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.956 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.956 issued rwts: total=12710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.956 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:36.956 filename1: (groupid=0, jobs=1): err= 0: pid=704420: Wed Nov 20 12:44:19 2024 00:34:36.956 read: IOPS=2636, BW=20.6MiB/s (21.6MB/s)(103MiB/5002msec) 00:34:36.956 slat (nsec): min=6143, max=31899, avg=8765.27, stdev=3040.27 00:34:36.956 clat (usec): min=782, 
max=5445, avg=3007.53, stdev=408.99 00:34:36.956 lat (usec): min=798, max=5455, avg=3016.30, stdev=408.76 00:34:36.956 clat percentiles (usec): 00:34:36.956 | 1.00th=[ 2040], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2737], 00:34:36.956 | 30.00th=[ 2868], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3032], 00:34:36.956 | 70.00th=[ 3064], 80.00th=[ 3195], 90.00th=[ 3490], 95.00th=[ 3752], 00:34:36.956 | 99.00th=[ 4359], 99.50th=[ 4490], 99.90th=[ 5080], 99.95th=[ 5145], 00:34:36.956 | 99.99th=[ 5407] 00:34:36.956 bw ( KiB/s): min=20608, max=21728, per=25.27%, avg=21092.80, stdev=333.67, samples=10 00:34:36.956 iops : min= 2576, max= 2716, avg=2636.60, stdev=41.71, samples=10 00:34:36.956 lat (usec) : 1000=0.01% 00:34:36.956 lat (msec) : 2=0.84%, 4=96.59%, 10=2.56% 00:34:36.956 cpu : usr=96.14%, sys=3.56%, ctx=7, majf=0, minf=9 00:34:36.956 IO depths : 1=0.3%, 2=4.1%, 4=68.2%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.956 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.956 issued rwts: total=13190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.956 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:36.956 filename1: (groupid=0, jobs=1): err= 0: pid=704421: Wed Nov 20 12:44:19 2024 00:34:36.956 read: IOPS=2455, BW=19.2MiB/s (20.1MB/s)(96.0MiB/5003msec) 00:34:36.956 slat (nsec): min=6092, max=32410, avg=8485.93, stdev=2973.77 00:34:36.956 clat (usec): min=1030, max=5949, avg=3233.19, stdev=452.90 00:34:36.956 lat (usec): min=1037, max=5955, avg=3241.67, stdev=452.82 00:34:36.956 clat percentiles (usec): 00:34:36.956 | 1.00th=[ 2409], 5.00th=[ 2769], 10.00th=[ 2933], 20.00th=[ 2999], 00:34:36.956 | 30.00th=[ 3032], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3130], 00:34:36.956 | 70.00th=[ 3294], 80.00th=[ 3458], 90.00th=[ 3785], 95.00th=[ 4146], 00:34:36.956 | 99.00th=[ 5014], 99.50th=[ 5276], 99.90th=[ 5538], 99.95th=[ 5669], 
00:34:36.956 | 99.99th=[ 5932] 00:34:36.956 bw ( KiB/s): min=18304, max=20912, per=23.53%, avg=19640.00, stdev=805.85, samples=10 00:34:36.956 iops : min= 2288, max= 2614, avg=2455.00, stdev=100.73, samples=10 00:34:36.956 lat (msec) : 2=0.16%, 4=93.49%, 10=6.35% 00:34:36.956 cpu : usr=96.14%, sys=3.56%, ctx=8, majf=0, minf=9 00:34:36.956 IO depths : 1=0.1%, 2=1.0%, 4=72.4%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.956 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.956 issued rwts: total=12283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.956 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:36.956 00:34:36.956 Run status group 0 (all jobs): 00:34:36.956 READ: bw=81.5MiB/s (85.5MB/s), 19.2MiB/s-21.9MiB/s (20.1MB/s-22.9MB/s), io=408MiB (428MB), run=5002-5003msec 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.956 00:34:36.956 real 0m24.250s 00:34:36.956 user 4m52.077s 00:34:36.956 sys 0m5.196s 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:36.956 12:44:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.956 ************************************ 00:34:36.956 END TEST fio_dif_rand_params 00:34:36.956 ************************************ 00:34:36.956 12:44:19 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:36.956 12:44:19 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:34:36.956 12:44:19 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:36.956 12:44:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:36.956 ************************************ 00:34:36.956 START TEST fio_dif_digest 00:34:36.956 ************************************ 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@10 -- # set +x 00:34:36.956 bdev_null0 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:36.956 [2024-11-20 12:44:19.863045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:34:36.956 
12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:36.956 { 00:34:36.956 "params": { 00:34:36.956 "name": "Nvme$subsystem", 00:34:36.956 "trtype": "$TEST_TRANSPORT", 00:34:36.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:36.956 "adrfam": "ipv4", 00:34:36.956 "trsvcid": "$NVMF_PORT", 00:34:36.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:36.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:36.956 "hdgst": ${hdgst:-false}, 00:34:36.956 "ddgst": ${ddgst:-false} 00:34:36.956 }, 00:34:36.956 "method": "bdev_nvme_attach_controller" 00:34:36.956 } 00:34:36.956 EOF 00:34:36.956 )") 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:36.956 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@1345 -- # shift 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:36.957 "params": { 00:34:36.957 "name": "Nvme0", 00:34:36.957 "trtype": "tcp", 00:34:36.957 "traddr": "10.0.0.2", 00:34:36.957 "adrfam": "ipv4", 00:34:36.957 "trsvcid": "4420", 00:34:36.957 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:36.957 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:36.957 "hdgst": true, 00:34:36.957 "ddgst": true 00:34:36.957 }, 00:34:36.957 "method": "bdev_nvme_attach_controller" 00:34:36.957 }' 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:36.957 12:44:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.219 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:37.219 ... 00:34:37.219 fio-3.35 00:34:37.219 Starting 3 threads 00:34:49.424 00:34:49.424 filename0: (groupid=0, jobs=1): err= 0: pid=705654: Wed Nov 20 12:44:30 2024 00:34:49.424 read: IOPS=273, BW=34.2MiB/s (35.9MB/s)(344MiB/10047msec) 00:34:49.424 slat (usec): min=6, max=123, avg=18.42, stdev= 7.48 00:34:49.424 clat (usec): min=8221, max=50568, avg=10928.24, stdev=1271.07 00:34:49.424 lat (usec): min=8234, max=50594, avg=10946.66, stdev=1271.29 00:34:49.424 clat percentiles (usec): 00:34:49.424 | 1.00th=[ 9241], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 00:34:49.424 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:34:49.424 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:34:49.424 | 99.00th=[12780], 99.50th=[13042], 99.90th=[14746], 99.95th=[47973], 00:34:49.424 | 99.99th=[50594] 00:34:49.424 bw ( KiB/s): min=34048, max=36096, per=32.99%, avg=35220.21, stdev=485.09, samples=19 00:34:49.424 iops : min= 266, max= 282, avg=275.16, stdev= 3.79, samples=19 00:34:49.424 lat (msec) : 
10=10.84%, 20=89.09%, 50=0.04%, 100=0.04% 00:34:49.424 cpu : usr=97.33%, sys=2.36%, ctx=16, majf=0, minf=158 00:34:49.424 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:49.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.424 issued rwts: total=2749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:49.424 filename0: (groupid=0, jobs=1): err= 0: pid=705655: Wed Nov 20 12:44:30 2024 00:34:49.424 read: IOPS=264, BW=33.1MiB/s (34.7MB/s)(332MiB/10047msec) 00:34:49.424 slat (nsec): min=6464, max=44010, avg=17345.07, stdev=6926.27 00:34:49.424 clat (usec): min=8566, max=49088, avg=11308.69, stdev=1268.74 00:34:49.424 lat (usec): min=8583, max=49099, avg=11326.03, stdev=1268.74 00:34:49.424 clat percentiles (usec): 00:34:49.424 | 1.00th=[ 9503], 5.00th=[10028], 10.00th=[10421], 20.00th=[10683], 00:34:49.424 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:34:49.424 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:34:49.424 | 99.00th=[13173], 99.50th=[13435], 99.90th=[13960], 99.95th=[47449], 00:34:49.424 | 99.99th=[49021] 00:34:49.424 bw ( KiB/s): min=33024, max=35328, per=31.90%, avg=34061.47, stdev=658.08, samples=19 00:34:49.424 iops : min= 258, max= 276, avg=266.11, stdev= 5.14, samples=19 00:34:49.424 lat (msec) : 10=3.80%, 20=96.12%, 50=0.08% 00:34:49.424 cpu : usr=97.07%, sys=2.62%, ctx=16, majf=0, minf=53 00:34:49.424 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:49.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.424 issued rwts: total=2657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.424 latency : target=0, window=0, percentile=100.00%, depth=3 
00:34:49.424 filename0: (groupid=0, jobs=1): err= 0: pid=705656: Wed Nov 20 12:44:30 2024 00:34:49.424 read: IOPS=297, BW=37.2MiB/s (39.0MB/s)(372MiB/10008msec) 00:34:49.424 slat (nsec): min=6759, max=49601, avg=22198.76, stdev=5954.09 00:34:49.424 clat (usec): min=5387, max=12353, avg=10066.99, stdev=690.63 00:34:49.424 lat (usec): min=5396, max=12374, avg=10089.19, stdev=690.35 00:34:49.424 clat percentiles (usec): 00:34:49.424 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:34:49.424 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:34:49.424 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:34:49.424 | 99.00th=[11600], 99.50th=[11863], 99.90th=[12256], 99.95th=[12256], 00:34:49.424 | 99.99th=[12387] 00:34:49.424 bw ( KiB/s): min=37120, max=39168, per=35.63%, avg=38049.68, stdev=541.11, samples=19 00:34:49.424 iops : min= 290, max= 306, avg=297.26, stdev= 4.23, samples=19 00:34:49.424 lat (msec) : 10=44.13%, 20=55.87% 00:34:49.424 cpu : usr=96.19%, sys=3.49%, ctx=17, majf=0, minf=113 00:34:49.424 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:49.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.424 issued rwts: total=2975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:49.424 00:34:49.424 Run status group 0 (all jobs): 00:34:49.424 READ: bw=104MiB/s (109MB/s), 33.1MiB/s-37.2MiB/s (34.7MB/s-39.0MB/s), io=1048MiB (1099MB), run=10008-10047msec 00:34:49.424 12:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:49.424 12:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:49.424 12:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:49.424 12:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # 
destroy_subsystem 0 00:34:49.424 12:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:49.424 12:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:49.424 12:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.424 12:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:49.424 12:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.424 12:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:49.424 12:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.424 12:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:49.424 12:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.424 00:34:49.424 real 0m11.288s 00:34:49.424 user 0m35.839s 00:34:49.424 sys 0m1.128s 00:34:49.424 12:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:49.424 12:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:49.424 ************************************ 00:34:49.424 END TEST fio_dif_digest 00:34:49.424 ************************************ 00:34:49.424 12:44:31 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:49.424 12:44:31 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:49.424 12:44:31 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:49.424 12:44:31 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:49.424 12:44:31 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:49.424 12:44:31 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:49.424 12:44:31 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:49.424 12:44:31 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:49.424 rmmod nvme_tcp 00:34:49.424 rmmod nvme_fabrics 00:34:49.424 rmmod 
nvme_keyring 00:34:49.424 12:44:31 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:49.424 12:44:31 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:49.424 12:44:31 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:49.424 12:44:31 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 697105 ']' 00:34:49.424 12:44:31 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 697105 00:34:49.424 12:44:31 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 697105 ']' 00:34:49.424 12:44:31 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 697105 00:34:49.424 12:44:31 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:49.424 12:44:31 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:49.424 12:44:31 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 697105 00:34:49.424 12:44:31 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:49.424 12:44:31 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:49.424 12:44:31 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 697105' 00:34:49.424 killing process with pid 697105 00:34:49.424 12:44:31 nvmf_dif -- common/autotest_common.sh@973 -- # kill 697105 00:34:49.424 12:44:31 nvmf_dif -- common/autotest_common.sh@978 -- # wait 697105 00:34:49.424 12:44:31 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:49.424 12:44:31 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:51.332 Waiting for block devices as requested 00:34:51.332 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:51.332 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:51.332 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:51.332 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:51.591 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:51.591 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:51.592 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 
00:34:51.850 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:51.850 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:51.850 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:52.110 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:52.110 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:52.110 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:52.110 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:52.369 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:52.369 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:52.369 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:52.628 12:44:35 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:52.628 12:44:35 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:52.628 12:44:35 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:52.628 12:44:35 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:52.628 12:44:35 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:52.628 12:44:35 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:52.628 12:44:35 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:52.628 12:44:35 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:52.628 12:44:35 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:52.628 12:44:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:52.628 12:44:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:54.534 12:44:37 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:54.534 00:34:54.534 real 1m13.953s 00:34:54.534 user 7m9.428s 00:34:54.534 sys 0m20.152s 00:34:54.534 12:44:37 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:54.534 12:44:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:54.534 ************************************ 00:34:54.534 END TEST nvmf_dif 00:34:54.534 ************************************ 00:34:54.534 12:44:37 -- 
spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:54.534 12:44:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:54.534 12:44:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:54.534 12:44:37 -- common/autotest_common.sh@10 -- # set +x 00:34:54.793 ************************************ 00:34:54.793 START TEST nvmf_abort_qd_sizes 00:34:54.793 ************************************ 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:54.793 * Looking for test storage... 00:34:54.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:54.793 12:44:37 
nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:54.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.793 --rc genhtml_branch_coverage=1 
00:34:54.793 --rc genhtml_function_coverage=1 00:34:54.793 --rc genhtml_legend=1 00:34:54.793 --rc geninfo_all_blocks=1 00:34:54.793 --rc geninfo_unexecuted_blocks=1 00:34:54.793 00:34:54.793 ' 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:54.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.793 --rc genhtml_branch_coverage=1 00:34:54.793 --rc genhtml_function_coverage=1 00:34:54.793 --rc genhtml_legend=1 00:34:54.793 --rc geninfo_all_blocks=1 00:34:54.793 --rc geninfo_unexecuted_blocks=1 00:34:54.793 00:34:54.793 ' 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:54.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.793 --rc genhtml_branch_coverage=1 00:34:54.793 --rc genhtml_function_coverage=1 00:34:54.793 --rc genhtml_legend=1 00:34:54.793 --rc geninfo_all_blocks=1 00:34:54.793 --rc geninfo_unexecuted_blocks=1 00:34:54.793 00:34:54.793 ' 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:54.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.793 --rc genhtml_branch_coverage=1 00:34:54.793 --rc genhtml_function_coverage=1 00:34:54.793 --rc genhtml_legend=1 00:34:54.793 --rc geninfo_all_blocks=1 00:34:54.793 --rc geninfo_unexecuted_blocks=1 00:34:54.793 00:34:54.793 ' 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:54.793 
12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:54.793 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:54.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:34:54.794 12:44:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:01.503 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:01.503 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:01.503 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:01.503 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:01.503 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:01.503 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:01.503 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:01.503 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:01.503 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:01.503 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:01.504 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:01.504 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:01.504 Found net devices under 0000:86:00.0: cvl_0_0 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:01.504 Found net devices under 0000:86:00.1: cvl_0_1 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:01.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:01.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:35:01.504 00:35:01.504 --- 10.0.0.2 ping statistics --- 00:35:01.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.504 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:01.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:01.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:35:01.504 00:35:01.504 --- 10.0.0.1 ping statistics --- 00:35:01.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.504 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:01.504 12:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:03.408 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:03.408 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:03.408 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:03.408 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:03.408 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:03.408 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:03.408 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:03.668 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:03.668 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:03.668 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:03.668 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:03.668 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:03.668 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:03.668 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:03.668 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:35:03.668 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:04.604 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:04.604 12:44:47 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:04.604 12:44:47 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:04.604 12:44:47 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:04.604 12:44:47 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:04.604 12:44:47 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:04.604 12:44:47 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:04.604 12:44:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:04.604 12:44:47 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:04.604 12:44:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:04.604 12:44:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:04.604 12:44:47 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=713506 00:35:04.604 12:44:47 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 713506 00:35:04.604 12:44:47 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:04.604 12:44:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 713506 ']' 00:35:04.604 12:44:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:04.604 12:44:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:04.604 12:44:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:04.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:04.604 12:44:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:04.604 12:44:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:04.604 [2024-11-20 12:44:47.622564] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 00:35:04.604 [2024-11-20 12:44:47.622614] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:04.604 [2024-11-20 12:44:47.704432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:04.862 [2024-11-20 12:44:47.751834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:04.862 [2024-11-20 12:44:47.751871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:04.862 [2024-11-20 12:44:47.751878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:04.862 [2024-11-20 12:44:47.751884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:04.862 [2024-11-20 12:44:47.751889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:04.862 [2024-11-20 12:44:47.753493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:04.862 [2024-11-20 12:44:47.753600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:04.862 [2024-11-20 12:44:47.753685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.862 [2024-11-20 12:44:47.753685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:05.426 12:44:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:05.684 ************************************ 00:35:05.684 START TEST spdk_target_abort 00:35:05.684 ************************************ 00:35:05.684 12:44:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:05.684 12:44:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:05.684 12:44:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:35:05.684 12:44:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.684 12:44:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:08.984 spdk_targetn1 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:08.984 [2024-11-20 12:44:51.378220] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:08.984 [2024-11-20 12:44:51.435365] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:08.984 12:44:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:12.260 Initializing NVMe Controllers 00:35:12.260 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:12.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:12.260 Initialization complete. Launching workers. 
00:35:12.260 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16890, failed: 0 00:35:12.260 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1526, failed to submit 15364 00:35:12.260 success 759, unsuccessful 767, failed 0 00:35:12.260 12:44:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:12.260 12:44:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:15.534 Initializing NVMe Controllers 00:35:15.534 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:15.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:15.534 Initialization complete. Launching workers. 00:35:15.534 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8548, failed: 0 00:35:15.535 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1245, failed to submit 7303 00:35:15.535 success 311, unsuccessful 934, failed 0 00:35:15.535 12:44:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:15.535 12:44:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:18.809 Initializing NVMe Controllers 00:35:18.809 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:18.809 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:18.809 Initialization complete. Launching workers. 
00:35:18.809 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37825, failed: 0 00:35:18.809 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2819, failed to submit 35006 00:35:18.809 success 610, unsuccessful 2209, failed 0 00:35:18.809 12:45:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:18.809 12:45:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.809 12:45:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:18.809 12:45:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.809 12:45:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:18.809 12:45:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.809 12:45:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:19.374 12:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.374 12:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 713506 00:35:19.374 12:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 713506 ']' 00:35:19.374 12:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 713506 00:35:19.633 12:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:35:19.633 12:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:19.633 12:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 713506 00:35:19.633 12:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:19.633 12:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:19.633 12:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 713506' 00:35:19.633 killing process with pid 713506 00:35:19.633 12:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 713506 00:35:19.633 12:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 713506 00:35:19.633 00:35:19.633 real 0m14.163s 00:35:19.633 user 0m56.302s 00:35:19.633 sys 0m2.671s 00:35:19.633 12:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:19.633 12:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:19.633 ************************************ 00:35:19.633 END TEST spdk_target_abort 00:35:19.633 ************************************ 00:35:19.633 12:45:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:19.633 12:45:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:19.633 12:45:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:19.633 12:45:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:19.896 ************************************ 00:35:19.896 START TEST kernel_target_abort 00:35:19.896 ************************************ 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:35:19.896 12:45:02 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:19.896 12:45:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:22.433 Waiting for block devices as requested 00:35:22.693 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:22.693 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:22.693 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:22.952 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:22.952 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:22.952 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:23.212 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:23.212 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:23.212 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:23.212 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:23.472 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:23.472 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:23.472 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:23.731 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:23.731 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:23.731 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:23.731 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:23.990 12:45:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:23.990 12:45:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:23.990 12:45:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:23.990 12:45:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:35:23.990 12:45:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:23.990 12:45:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:23.990 12:45:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:23.990 12:45:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:23.990 12:45:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:23.990 No valid GPT data, bailing 00:35:23.990 12:45:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:23.990 12:45:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:23.990 12:45:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:23.990 12:45:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:23.990 12:45:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:23.990 12:45:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:23.990 12:45:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:23.991 00:35:23.991 Discovery Log Number of Records 2, Generation counter 2 00:35:23.991 =====Discovery Log Entry 0====== 00:35:23.991 trtype: tcp 00:35:23.991 adrfam: ipv4 00:35:23.991 subtype: current discovery subsystem 00:35:23.991 treq: not specified, sq flow control disable supported 00:35:23.991 portid: 1 00:35:23.991 trsvcid: 4420 00:35:23.991 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:23.991 traddr: 10.0.0.1 00:35:23.991 eflags: none 00:35:23.991 sectype: none 00:35:23.991 =====Discovery Log Entry 1====== 00:35:23.991 trtype: tcp 00:35:23.991 adrfam: ipv4 00:35:23.991 subtype: nvme subsystem 00:35:23.991 treq: not specified, sq flow control disable supported 00:35:23.991 portid: 1 00:35:23.991 trsvcid: 4420 00:35:23.991 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:23.991 traddr: 10.0.0.1 00:35:23.991 eflags: none 00:35:23.991 sectype: none 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:23.991 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:24.249 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:24.249 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:24.249 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:24.249 12:45:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:27.531 Initializing NVMe Controllers 00:35:27.531 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:27.531 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:27.531 Initialization complete. Launching workers. 
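The configfs sequence traced just above (nvmf/common.sh@686-705: mkdir the subsystem, namespace, and port, echo the backing device and transport parameters, then symlink the subsystem into the port) can be sketched as a standalone helper. The configfs root is parameterized here so the sketch can be exercised without root against a scratch directory; the NQN, address, and values mirror the log, but treat this as an illustration of the kernel nvmet soft-target interface, not the exact SPDK helper.

```shell
# Minimal sketch of a kernel NVMe-oF (nvmet) soft-target setup over configfs.
# CFG defaults to the real configfs root; point it at a scratch dir to dry-run.
CFG="${CFG:-/sys/kernel/config/nvmet}"
NQN="nqn.2016-06.io.spdk:testnqn"
DEV="${DEV:-/dev/nvme0n1}"

setup_kernel_target() {
    # Port dirs grow a 'subsystems' group automatically on real configfs;
    # mkdir -p tolerates that and also builds it in a dry-run tree.
    mkdir -p "$CFG/subsystems/$NQN/namespaces/1" "$CFG/ports/1/subsystems"
    echo 1        > "$CFG/subsystems/$NQN/attr_allow_any_host"
    echo "$DEV"   > "$CFG/subsystems/$NQN/namespaces/1/device_path"
    echo 1        > "$CFG/subsystems/$NQN/namespaces/1/enable"
    echo 10.0.0.1 > "$CFG/ports/1/addr_traddr"
    echo tcp      > "$CFG/ports/1/addr_trtype"
    echo 4420     > "$CFG/ports/1/addr_trsvcid"
    echo ipv4     > "$CFG/ports/1/addr_adrfam"
    # Expose the subsystem on the port, as the trace's ln -s does.
    ln -s "$CFG/subsystems/$NQN" "$CFG/ports/1/subsystems/$NQN"
}
```

Run as root against the real configfs root (with nvmet and nvmet_tcp loaded) and the target becomes discoverable exactly as the `nvme discover` output above shows.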
00:35:27.531 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 90565, failed: 0 00:35:27.531 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 90565, failed to submit 0 00:35:27.531 success 0, unsuccessful 90565, failed 0 00:35:27.531 12:45:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:27.531 12:45:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:30.809 Initializing NVMe Controllers 00:35:30.809 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:30.809 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:30.809 Initialization complete. Launching workers. 00:35:30.809 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 144534, failed: 0 00:35:30.809 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36166, failed to submit 108368 00:35:30.809 success 0, unsuccessful 36166, failed 0 00:35:30.809 12:45:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:30.809 12:45:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:33.340 Initializing NVMe Controllers 00:35:33.340 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:33.340 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:33.340 Initialization complete. Launching workers. 
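The rabort loop traced above (target/abort_qd_sizes.sh@26-34) assembles one `-r` transport string from trtype/adrfam/traddr/trsvcid/subnqn via bash indirect expansion, then runs the abort example once per queue depth. A minimal sketch, echoing the command line instead of invoking the real binary:

```shell
# Rebuild the transport string the way the trace shows, one "name:value"
# pair per variable, then loop over the queue depths under test.
trtype=tcp adrfam=IPv4 traddr=10.0.0.1 trsvcid=4420
subnqn=nqn.2016-06.io.spdk:testnqn
qds=(4 24 64)

target=""
for r in trtype adrfam traddr trsvcid subnqn; do
    target="${target:+$target }$r:${!r}"   # ${!r} = value of the var named $r
done

for qd in "${qds[@]}"; do
    # The real harness calls build/examples/abort here; we only echo it.
    echo "abort -q $qd -w rw -M 50 -o 4096 -r '$target'"
done
```

The incremental `target='trtype:tcp'`, `target='trtype:tcp adrfam:IPv4'`, ... lines in the xtrace above are exactly this accumulation.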
00:35:33.340 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 137462, failed: 0 00:35:33.340 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34442, failed to submit 103020 00:35:33.340 success 0, unsuccessful 34442, failed 0 00:35:33.599 12:45:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:33.599 12:45:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:33.599 12:45:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:35:33.599 12:45:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:33.599 12:45:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:33.599 12:45:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:33.599 12:45:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:33.599 12:45:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:33.599 12:45:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:33.599 12:45:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:36.170 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:36.170 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:36.428 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:36.428 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:36.428 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:36.428 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:36.428 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:36.428 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:36.428 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:36.428 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:36.428 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:36.428 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:36.428 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:36.428 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:36.428 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:36.428 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:37.366 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:37.366 00:35:37.366 real 0m17.576s 00:35:37.366 user 0m9.172s 00:35:37.366 sys 0m5.048s 00:35:37.366 12:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:37.366 12:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:37.366 ************************************ 00:35:37.366 END TEST kernel_target_abort 00:35:37.366 ************************************ 00:35:37.366 12:45:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:37.366 12:45:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:37.366 12:45:20 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:37.366 12:45:20 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:37.366 12:45:20 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:37.366 12:45:20 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:37.366 12:45:20 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:37.366 12:45:20 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:37.366 rmmod nvme_tcp 00:35:37.366 rmmod nvme_fabrics 00:35:37.366 rmmod nvme_keyring 00:35:37.366 12:45:20 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:35:37.366 12:45:20 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:37.366 12:45:20 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:37.366 12:45:20 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 713506 ']' 00:35:37.366 12:45:20 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 713506 00:35:37.366 12:45:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 713506 ']' 00:35:37.366 12:45:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 713506 00:35:37.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (713506) - No such process 00:35:37.366 12:45:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 713506 is not found' 00:35:37.366 Process with pid 713506 is not found 00:35:37.366 12:45:20 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:37.366 12:45:20 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:40.656 Waiting for block devices as requested 00:35:40.656 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:40.657 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:40.657 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:40.657 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:40.657 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:40.657 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:40.657 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:40.916 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:40.916 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:40.916 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:40.916 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:41.176 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:41.176 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:41.176 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:41.436 0000:80:04.2 
(8086 2021): vfio-pci -> ioatdma 00:35:41.436 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:41.436 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:41.696 12:45:24 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:41.696 12:45:24 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:41.696 12:45:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:41.696 12:45:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:41.696 12:45:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:41.696 12:45:24 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:41.696 12:45:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:41.696 12:45:24 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:41.696 12:45:24 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:41.696 12:45:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:41.696 12:45:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.602 12:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:43.602 00:35:43.602 real 0m48.975s 00:35:43.602 user 1m9.943s 00:35:43.602 sys 0m16.490s 00:35:43.602 12:45:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:43.602 12:45:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:43.602 ************************************ 00:35:43.602 END TEST nvmf_abort_qd_sizes 00:35:43.602 ************************************ 00:35:43.602 12:45:26 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:43.602 12:45:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:43.602 12:45:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 
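The clean_kernel_target steps traced earlier (nvmf/common.sh@712-723) undo the setup in reverse: drop the port-to-subsystem link, then remove the namespace, port, and subsystem directories, and finally unload the nvmet modules. A sketch with the same parameterized configfs root (module unload left commented since it needs root; order matters because configfs refuses to rmdir a subsystem still linked to a port):

```shell
# Tear down the kernel nvmet target created during setup.
CFG="${CFG:-/sys/kernel/config/nvmet}"
NQN="nqn.2016-06.io.spdk:testnqn"

clean_kernel_target() {
    [ -d "$CFG/subsystems/$NQN" ] || return 0
    rm -f "$CFG/ports/1/subsystems/$NQN"       # unlink port -> subsystem first
    rmdir "$CFG/subsystems/$NQN/namespaces/1"  # then the namespace
    # ports/1/subsystems is auto-managed on real configfs; it only needs an
    # explicit rmdir in a dry-run scratch tree:
    if [ -d "$CFG/ports/1/subsystems" ]; then
        rmdir "$CFG/ports/1/subsystems"
    fi
    rmdir "$CFG/ports/1"
    rmdir "$CFG/subsystems/$NQN"
    # modprobe -r nvmet_tcp nvmet   # root-only, as in the trace
}
```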
00:35:43.602 12:45:26 -- common/autotest_common.sh@10 -- # set +x 00:35:43.862 ************************************ 00:35:43.862 START TEST keyring_file 00:35:43.862 ************************************ 00:35:43.862 12:45:26 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:43.862 * Looking for test storage... 00:35:43.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:43.862 12:45:26 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:43.862 12:45:26 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:35:43.862 12:45:26 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:43.862 12:45:26 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:43.862 12:45:26 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:43.862 12:45:26 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:43.862 12:45:26 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:43.862 12:45:26 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:43.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.862 --rc genhtml_branch_coverage=1 00:35:43.862 --rc genhtml_function_coverage=1 00:35:43.862 --rc genhtml_legend=1 00:35:43.862 --rc geninfo_all_blocks=1 00:35:43.862 --rc geninfo_unexecuted_blocks=1 00:35:43.862 00:35:43.862 ' 00:35:43.862 12:45:26 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:43.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.862 --rc genhtml_branch_coverage=1 00:35:43.862 --rc genhtml_function_coverage=1 00:35:43.862 --rc genhtml_legend=1 00:35:43.862 --rc geninfo_all_blocks=1 00:35:43.862 --rc 
geninfo_unexecuted_blocks=1 00:35:43.862 00:35:43.862 ' 00:35:43.862 12:45:26 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:43.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.862 --rc genhtml_branch_coverage=1 00:35:43.862 --rc genhtml_function_coverage=1 00:35:43.862 --rc genhtml_legend=1 00:35:43.862 --rc geninfo_all_blocks=1 00:35:43.862 --rc geninfo_unexecuted_blocks=1 00:35:43.862 00:35:43.862 ' 00:35:43.862 12:45:26 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:43.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.862 --rc genhtml_branch_coverage=1 00:35:43.862 --rc genhtml_function_coverage=1 00:35:43.862 --rc genhtml_legend=1 00:35:43.862 --rc geninfo_all_blocks=1 00:35:43.862 --rc geninfo_unexecuted_blocks=1 00:35:43.862 00:35:43.862 ' 00:35:43.862 12:45:26 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:43.862 12:45:26 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:43.862 12:45:26 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:43.862 12:45:26 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:43.862 12:45:26 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:43.862 12:45:26 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:43.862 12:45:26 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:43.862 12:45:26 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:43.862 12:45:26 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:43.862 12:45:26 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:43.862 12:45:26 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:43.862 12:45:26 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:43.862 12:45:26 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:43.862 12:45:26 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:43.862 12:45:26 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:43.862 12:45:26 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:43.862 12:45:26 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:43.863 12:45:26 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:43.863 12:45:26 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:43.863 12:45:26 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:43.863 12:45:26 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:43.863 12:45:26 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:43.863 12:45:26 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:43.863 12:45:26 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:43.863 12:45:26 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.863 12:45:26 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.863 12:45:26 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.863 12:45:26 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:43.863 12:45:26 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.863 12:45:26 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:43.863 12:45:26 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:43.863 12:45:26 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:43.863 12:45:26 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:43.863 12:45:26 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:43.863 12:45:26 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:43.863 12:45:26 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
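The trace derives `--hostnqn`/`--hostid` from `nvme gen-hostnqn` (nvmf/common.sh@17-19), which produces a UUID-based NQN in the `nqn.2014-08.org.nvmexpress:uuid:<UUID>` form seen throughout this log. A sketch of an equivalent generator; reading the kernel's random UUID source is an assumption for portability of this sketch (nvme-cli generates its own UUID):

```shell
# Equivalent of `nvme gen-hostnqn`: a UUID-based host NQN.
gen_hostnqn() {
    local uuid
    uuid=$(cat /proc/sys/kernel/random/uuid)   # Linux-only UUID source
    printf 'nqn.2014-08.org.nvmexpress:uuid:%s\n' "$uuid"
}
```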
00:35:43.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:43.863 12:45:26 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:43.863 12:45:26 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:43.863 12:45:26 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:43.863 12:45:26 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:43.863 12:45:26 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:43.863 12:45:26 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:43.863 12:45:26 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:43.863 12:45:26 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:43.863 12:45:26 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:43.863 12:45:26 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:43.863 12:45:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:43.863 12:45:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:43.863 12:45:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:43.863 12:45:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:43.863 12:45:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:43.863 12:45:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.B0YoE3kynV 00:35:43.863 12:45:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:43.863 12:45:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:43.863 12:45:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:43.863 12:45:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:43.863 12:45:26 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:35:43.863 12:45:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:43.863 12:45:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:44.121 12:45:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.B0YoE3kynV 00:35:44.121 12:45:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.B0YoE3kynV 00:35:44.121 12:45:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.B0YoE3kynV 00:35:44.121 12:45:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:44.121 12:45:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:44.121 12:45:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:44.121 12:45:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:44.122 12:45:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:44.122 12:45:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:44.122 12:45:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wudhfqTXNK 00:35:44.122 12:45:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:44.122 12:45:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:44.122 12:45:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:44.122 12:45:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:44.122 12:45:26 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:44.122 12:45:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:44.122 12:45:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:44.122 12:45:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wudhfqTXNK 00:35:44.122 12:45:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wudhfqTXNK 00:35:44.122 12:45:27 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.wudhfqTXNK 
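prep_key (keyring/common.sh@15-23 in the trace) formats a hex configured PSK as an NVMe TLS interchange key via an inline `python -`, writes it to a mktemp path, and chmods it 0600. A comparable sketch follows. The payload layout used here, `NVMeTLSkey-1:<hash>:base64(key || CRC32):` with hash byte 00 for digest 0, is my reading of the NVMe/TCP TLS PSK interchange format; treat that encoding as an assumption rather than SPDK's verified output.

```shell
# Sketch of prep_key: format a hex PSK as an interchange key and stash it
# in a mode-0600 temp file. Encoding assumptions are noted in the lead-in.
format_interchange_psk() {   # $1 = hex key, $2 = digest (0 -> hash byte 00)
    python3 - "$1" "$2" <<'PYEOF'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])
# Append the little-endian CRC32 of the key, then base64 the whole blob.
blob = key + struct.pack('<I', zlib.crc32(key))
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(blob).decode()}:")
PYEOF
}

prep_key() {                 # $1 = hex key, $2 = digest; prints key file path
    local path
    path=$(mktemp)
    format_interchange_psk "$1" "$2" > "$path"
    chmod 0600 "$path"
    echo "$path"
}
```

The `/tmp/tmp.B0YoE3kynV`-style paths and `chmod 0600` calls in the trace correspond to the mktemp and chmod here.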
00:35:44.122 12:45:27 keyring_file -- keyring/file.sh@30 -- # tgtpid=722804 00:35:44.122 12:45:27 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:44.122 12:45:27 keyring_file -- keyring/file.sh@32 -- # waitforlisten 722804 00:35:44.122 12:45:27 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 722804 ']' 00:35:44.122 12:45:27 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:44.122 12:45:27 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:44.122 12:45:27 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:44.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:44.122 12:45:27 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:44.122 12:45:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:44.122 [2024-11-20 12:45:27.094528] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
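waitforlisten (visible above via common/autotest_common.sh@839-844, `max_retries=100`, and the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message) blocks until spdk_tgt answers on its RPC socket. A generic poll-until-path-exists sketch in the same spirit; the real helper additionally probes the RPC endpoint, not just the path:

```shell
# Poll for a path (e.g. an RPC UNIX socket) with a bounded retry count,
# mirroring the max_retries loop in the trace.
waitforpath() {              # $1 = path, $2 = max retries (default 100)
    local path=$1 retries=${2:-100} i
    for ((i = 0; i < retries; i++)); do
        if [ -e "$path" ]; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}
```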
00:35:44.122 [2024-11-20 12:45:27.094578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid722804 ] 00:35:44.122 [2024-11-20 12:45:27.170470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:44.122 [2024-11-20 12:45:27.212747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.056 12:45:27 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:45.056 12:45:27 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:45.056 12:45:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:45.056 12:45:27 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.056 12:45:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:45.056 [2024-11-20 12:45:27.931706] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.056 null0 00:35:45.056 [2024-11-20 12:45:27.963757] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:45.056 [2024-11-20 12:45:27.964084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:45.056 12:45:27 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.056 12:45:27 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:45.056 12:45:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:45.056 12:45:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:45.056 12:45:27 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:45.056 12:45:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
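The duplicate-listener check that follows uses the harness's NOT wrapper (common/autotest_common.sh@652-655 in the trace): the step passes only when the wrapped rpc_cmd fails, here with the expected "Listener already exists" JSON-RPC error. A minimal sketch of that inversion helper:

```shell
# NOT: succeed iff the wrapped command fails; used so an expected error
# (like re-adding an existing listener) counts as a passing step.
NOT() {
    if "$@"; then
        return 1        # command unexpectedly succeeded
    fi
    return 0
}
```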
00:35:45.056 12:45:27 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:35:45.056 12:45:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:45.056 12:45:27 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:35:45.056 12:45:27 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:45.056 12:45:27 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:35:45.057 [2024-11-20 12:45:27.991819] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists
00:35:45.057 request:
00:35:45.057 {
00:35:45.057 "nqn": "nqn.2016-06.io.spdk:cnode0",
00:35:45.057 "secure_channel": false,
00:35:45.057 "listen_address": {
00:35:45.057 "trtype": "tcp",
00:35:45.057 "traddr": "127.0.0.1",
00:35:45.057 "trsvcid": "4420"
00:35:45.057 },
00:35:45.057 "method": "nvmf_subsystem_add_listener",
00:35:45.057 "req_id": 1
00:35:45.057 }
00:35:45.057 Got JSON-RPC error response
00:35:45.057 response:
00:35:45.057 {
00:35:45.057 "code": -32602,
00:35:45.057 "message": "Invalid parameters"
00:35:45.057 }
00:35:45.057 12:45:27 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:35:45.057 12:45:27 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:35:45.057 12:45:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:35:45.057 12:45:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:35:45.057 12:45:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:35:45.057 12:45:27 keyring_file -- keyring/file.sh@47 -- # bperfpid=722979
00:35:45.057 12:45:27 keyring_file -- keyring/file.sh@49 -- # waitforlisten 722979 /var/tmp/bperf.sock
00:35:45.057 12:45:27 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z
00:35:45.057 12:45:28 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 722979 ']'
00:35:45.057 12:45:28 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:45.057 12:45:28 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:45.057 12:45:28 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:45.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:45.057 12:45:28 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:45.057 12:45:28 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:35:45.057 [2024-11-20 12:45:28.045915] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization...
00:35:45.057 [2024-11-20 12:45:28.045963] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid722979 ]
00:35:45.057 [2024-11-20 12:45:28.120899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:45.057 [2024-11-20 12:45:28.163425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:35:45.315 12:45:28 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:45.315 12:45:28 keyring_file -- common/autotest_common.sh@868 -- # return 0
00:35:45.315 12:45:28 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.B0YoE3kynV
00:35:45.315 12:45:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.B0YoE3kynV
00:35:45.574 12:45:28 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.wudhfqTXNK
00:35:45.574 12:45:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.wudhfqTXNK
00:35:45.574 12:45:28 keyring_file -- keyring/file.sh@52 -- # get_key key0
00:35:45.574 12:45:28 keyring_file -- keyring/file.sh@52 -- # jq -r .path
00:35:45.574 12:45:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:45.574 12:45:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:35:45.574 12:45:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:45.832 12:45:28 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.B0YoE3kynV == \/\t\m\p\/\t\m\p\.\B\0\Y\o\E\3\k\y\n\V ]]
00:35:45.832 12:45:28 keyring_file -- keyring/file.sh@53 -- # get_key key1
00:35:45.832 12:45:28 keyring_file -- keyring/file.sh@53 -- # jq -r .path
00:35:45.833 12:45:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:35:45.833 12:45:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:45.833 12:45:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:46.091 12:45:29 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.wudhfqTXNK == \/\t\m\p\/\t\m\p\.\w\u\d\h\f\q\T\X\N\K ]]
00:35:46.091 12:45:29 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0
00:35:46.091 12:45:29 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:35:46.091 12:45:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:35:46.091 12:45:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:46.091 12:45:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:46.091 12:45:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:35:46.349 12:45:29 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 ))
00:35:46.349 12:45:29 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1
00:35:46.349 12:45:29 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:35:46.349 12:45:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:35:46.349 12:45:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:46.349 12:45:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:35:46.349 12:45:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:46.607 12:45:29 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 ))
00:35:46.607 12:45:29 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:35:46.607 12:45:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:35:46.607 [2024-11-20 12:45:29.650156] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:35:46.607 nvme0n1
00:35:46.865 12:45:29 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0
00:35:46.865 12:45:29 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:35:46.865 12:45:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:35:46.865 12:45:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:46.865 12:45:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:35:46.865 12:45:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:46.865 12:45:29 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 ))
00:35:46.865 12:45:29 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1
00:35:46.865 12:45:29 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:35:46.865 12:45:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:35:46.865 12:45:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:46.865 12:45:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:46.865 12:45:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:35:47.123 12:45:30 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 ))
00:35:47.123 12:45:30 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:47.381 Running I/O for 1 seconds...
00:35:48.316 18711.00 IOPS, 73.09 MiB/s
00:35:48.316 Latency(us)
00:35:48.316 [2024-11-20T11:45:31.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:48.316 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:35:48.316 nvme0n1 : 1.00 18748.61 73.24 0.00 0.00 6813.53 4758.48 16526.47
00:35:48.316 [2024-11-20T11:45:31.432Z] ===================================================================================================================
00:35:48.316 [2024-11-20T11:45:31.432Z] Total : 18748.61 73.24 0.00 0.00 6813.53 4758.48 16526.47
00:35:48.316 {
00:35:48.316 "results": [
00:35:48.316 {
00:35:48.316 "job": "nvme0n1",
00:35:48.316 "core_mask": "0x2",
00:35:48.316 "workload": "randrw",
00:35:48.316 "percentage": 50,
00:35:48.316 "status": "finished",
00:35:48.316 "queue_depth": 128,
00:35:48.316 "io_size": 4096,
00:35:48.316 "runtime": 1.004821,
00:35:48.316 "iops": 18748.612937030575,
00:35:48.316 "mibps": 73.23676928527568,
00:35:48.316 "io_failed": 0,
00:35:48.316 "io_timeout": 0,
00:35:48.316 "avg_latency_us": 6813.528495789263,
00:35:48.316 "min_latency_us": 4758.48347826087,
00:35:48.316 "max_latency_us": 16526.46956521739
00:35:48.316 }
00:35:48.316 ],
00:35:48.316 "core_count": 1
00:35:48.316 }
00:35:48.316 12:45:31 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:35:48.316 12:45:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:35:48.574 12:45:31 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:35:48.574 12:45:31 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:35:48.574 12:45:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:35:48.574 12:45:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:48.574 12:45:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:35:48.574 12:45:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:48.574 12:45:31 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:35:48.574 12:45:31 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
00:35:48.574 12:45:31 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:35:48.574 12:45:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:35:48.574 12:45:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:48.574 12:45:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:48.574 12:45:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:35:48.831 12:45:31 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 ))
00:35:48.831 12:45:31 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:35:48.831 12:45:31 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:35:48.831 12:45:31 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:35:48.831 12:45:31 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:35:48.831 12:45:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:48.831 12:45:31 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:35:48.831 12:45:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:48.831 12:45:31 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:35:48.831 12:45:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:35:49.089 [2024-11-20 12:45:32.056273] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:35:49.089 [2024-11-20 12:45:32.056485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdcfd00 (107): Transport endpoint is not connected
00:35:49.089 [2024-11-20 12:45:32.057479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdcfd00 (9): Bad file descriptor
00:35:49.089 [2024-11-20 12:45:32.058481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:35:49.089 [2024-11-20 12:45:32.058491] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:35:49.089 [2024-11-20 12:45:32.058498] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:35:49.089 [2024-11-20 12:45:32.058506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
00:35:49.089 request:
00:35:49.089 {
00:35:49.089 "name": "nvme0",
00:35:49.089 "trtype": "tcp",
00:35:49.089 "traddr": "127.0.0.1",
00:35:49.089 "adrfam": "ipv4",
00:35:49.089 "trsvcid": "4420",
00:35:49.089 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:35:49.089 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:35:49.089 "prchk_reftag": false,
00:35:49.089 "prchk_guard": false,
00:35:49.089 "hdgst": false,
00:35:49.089 "ddgst": false,
00:35:49.089 "psk": "key1",
00:35:49.089 "allow_unrecognized_csi": false,
00:35:49.089 "method": "bdev_nvme_attach_controller",
00:35:49.089 "req_id": 1
00:35:49.089 }
00:35:49.089 Got JSON-RPC error response
00:35:49.089 response:
00:35:49.089 {
00:35:49.089 "code": -5,
00:35:49.089 "message": "Input/output error"
00:35:49.089 }
00:35:49.089 12:45:32 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:35:49.089 12:45:32 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:35:49.089 12:45:32 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:35:49.089 12:45:32 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:35:49.089 12:45:32 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0
00:35:49.089 12:45:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:35:49.089 12:45:32 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:35:49.090 12:45:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:49.090 12:45:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:49.090 12:45:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:35:49.347 12:45:32 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:35:49.347 12:45:32 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1
00:35:49.347 12:45:32 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:35:49.347 12:45:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:35:49.347 12:45:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:49.347 12:45:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:35:49.347 12:45:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:49.605 12:45:32 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 ))
00:35:49.605 12:45:32 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0
00:35:49.605 12:45:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:35:49.605 12:45:32 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1
00:35:49.605 12:45:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:35:49.863 12:45:32 keyring_file -- keyring/file.sh@78 -- # jq length
00:35:49.863 12:45:32 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys
00:35:49.863 12:45:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:50.120 12:45:33 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 ))
00:35:50.121 12:45:33 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.B0YoE3kynV
00:35:50.121 12:45:33 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.B0YoE3kynV
00:35:50.121 12:45:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:35:50.121 12:45:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.B0YoE3kynV
00:35:50.121 12:45:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:35:50.121 12:45:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:50.121 12:45:33 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:35:50.121 12:45:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:50.121 12:45:33 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.B0YoE3kynV
00:35:50.121 12:45:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.B0YoE3kynV
00:35:50.379 [2024-11-20 12:45:33.240630] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.B0YoE3kynV': 0100660
00:35:50.379 [2024-11-20 12:45:33.240658] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:35:50.379 request:
00:35:50.379 {
00:35:50.379 "name": "key0",
00:35:50.379 "path": "/tmp/tmp.B0YoE3kynV",
00:35:50.379 "method": "keyring_file_add_key",
00:35:50.379 "req_id": 1
00:35:50.379 }
00:35:50.379 Got JSON-RPC error response
00:35:50.379 response:
00:35:50.379 {
00:35:50.379 "code": -1,
00:35:50.379 "message": "Operation not permitted"
00:35:50.379 }
00:35:50.379 12:45:33 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:35:50.379 12:45:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:35:50.379 12:45:33 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:35:50.379 12:45:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:35:50.379 12:45:33 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.B0YoE3kynV
00:35:50.379 12:45:33 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.B0YoE3kynV
00:35:50.379 12:45:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.B0YoE3kynV
00:35:50.379 12:45:33 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.B0YoE3kynV
00:35:50.379 12:45:33 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0
00:35:50.379 12:45:33 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:35:50.379 12:45:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:35:50.379 12:45:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:50.379 12:45:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:35:50.379 12:45:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:50.638 12:45:33 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 ))
00:35:50.638 12:45:33 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:35:50.638 12:45:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:35:50.638 12:45:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:35:50.638 12:45:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:35:50.638 12:45:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:50.638 12:45:33 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:35:50.638 12:45:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:50.638 12:45:33 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:35:50.638 12:45:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:35:50.896 [2024-11-20 12:45:33.854278] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.B0YoE3kynV': No such file or directory
00:35:50.896 [2024-11-20 12:45:33.854303] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory
00:35:50.896 [2024-11-20 12:45:33.854318] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1
00:35:50.896 [2024-11-20 12:45:33.854341] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device
00:35:50.896 [2024-11-20 12:45:33.854349] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:35:50.896 [2024-11-20 12:45:33.854355] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1)
00:35:50.896 request:
00:35:50.896 {
00:35:50.896 "name": "nvme0",
00:35:50.896 "trtype": "tcp",
00:35:50.896 "traddr": "127.0.0.1",
00:35:50.896 "adrfam": "ipv4",
00:35:50.896 "trsvcid": "4420",
00:35:50.896 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:35:50.896 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:35:50.896 "prchk_reftag": false,
00:35:50.896 "prchk_guard": false,
00:35:50.896 "hdgst": false,
00:35:50.896 "ddgst": false,
00:35:50.896 "psk": "key0",
00:35:50.896 "allow_unrecognized_csi": false,
00:35:50.896 "method": "bdev_nvme_attach_controller",
00:35:50.896 "req_id": 1
00:35:50.896 }
00:35:50.896 Got JSON-RPC error response
00:35:50.896 response:
00:35:50.896 {
00:35:50.896 "code": -19,
00:35:50.896 "message": "No such device"
00:35:50.896 }
00:35:50.896 12:45:33 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:35:50.896 12:45:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:35:50.896 12:45:33 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:35:50.896 12:45:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:35:50.896 12:45:33 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0
00:35:50.896 12:45:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:35:51.154 12:45:34 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:35:51.154 12:45:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:35:51.154 12:45:34 keyring_file -- keyring/common.sh@17 -- # name=key0
00:35:51.154 12:45:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:35:51.154 12:45:34 keyring_file -- keyring/common.sh@17 -- # digest=0
00:35:51.154 12:45:34 keyring_file -- keyring/common.sh@18 -- # mktemp
00:35:51.154 12:45:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lv0OCQvmon
00:35:51.154 12:45:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:35:51.154 12:45:34 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:35:51.154 12:45:34 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest
00:35:51.154 12:45:34 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:35:51.154 12:45:34 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff
00:35:51.154 12:45:34 keyring_file -- nvmf/common.sh@732 -- # digest=0
00:35:51.154 12:45:34 keyring_file -- nvmf/common.sh@733 -- # python -
00:35:51.154 12:45:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lv0OCQvmon
00:35:51.154 12:45:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lv0OCQvmon
00:35:51.154 12:45:34 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.lv0OCQvmon
00:35:51.154 12:45:34 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lv0OCQvmon
00:35:51.154 12:45:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lv0OCQvmon
00:35:51.412 12:45:34 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:35:51.412 12:45:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:35:51.669 nvme0n1
00:35:51.670 12:45:34 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0
00:35:51.670 12:45:34 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:35:51.670 12:45:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:35:51.670 12:45:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:51.670 12:45:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:35:51.670 12:45:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:51.927 12:45:34 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 ))
00:35:51.927 12:45:34 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0
00:35:51.927 12:45:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:35:51.927 12:45:34 keyring_file -- keyring/file.sh@102 -- # get_key key0
00:35:51.927 12:45:34 keyring_file -- keyring/file.sh@102 -- # jq -r .removed
00:35:51.927 12:45:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:51.927 12:45:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:35:51.927 12:45:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:52.185 12:45:35 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]]
00:35:52.185 12:45:35 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0
00:35:52.185 12:45:35 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:35:52.185 12:45:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:35:52.185 12:45:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:35:52.185 12:45:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:35:52.185 12:45:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:52.442 12:45:35 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 ))
00:35:52.442 12:45:35 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:35:52.442 12:45:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:35:52.700 12:45:35 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys
00:35:52.700 12:45:35 keyring_file -- keyring/file.sh@105 -- # jq length
00:35:52.700 12:45:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:52.700 12:45:35 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 ))
00:35:52.700 12:45:35 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lv0OCQvmon
00:35:52.700 12:45:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lv0OCQvmon
00:35:52.958 12:45:35 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.wudhfqTXNK
00:35:52.958 12:45:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.wudhfqTXNK
00:35:53.216 12:45:36 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:35:53.216 12:45:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:35:53.474 nvme0n1
00:35:53.474 12:45:36 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config
00:35:53.474 12:45:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config
00:35:53.733 12:45:36 keyring_file -- keyring/file.sh@113 -- # config='{
00:35:53.733 "subsystems": [
00:35:53.733 {
00:35:53.733 "subsystem": "keyring",
00:35:53.733 "config": [
00:35:53.733 {
00:35:53.733 "method": "keyring_file_add_key",
00:35:53.733 "params": {
00:35:53.733 "name": "key0",
00:35:53.733 "path": "/tmp/tmp.lv0OCQvmon"
00:35:53.733 }
00:35:53.733 },
00:35:53.733 {
00:35:53.733 "method": "keyring_file_add_key",
00:35:53.733 "params": {
00:35:53.733 "name": "key1",
00:35:53.733 "path": "/tmp/tmp.wudhfqTXNK"
00:35:53.733 }
00:35:53.733 }
00:35:53.733 ]
00:35:53.733 },
00:35:53.733 {
00:35:53.733 "subsystem": "iobuf",
00:35:53.733 "config": [
00:35:53.733 {
00:35:53.733 "method": "iobuf_set_options",
00:35:53.733 "params": {
00:35:53.733 "small_pool_count": 8192,
00:35:53.733 "large_pool_count": 1024,
00:35:53.733 "small_bufsize": 8192,
00:35:53.733 "large_bufsize": 135168,
00:35:53.733 "enable_numa": false
00:35:53.733 }
00:35:53.733 }
00:35:53.733 ]
00:35:53.733 },
00:35:53.733 {
00:35:53.733 "subsystem": "sock",
00:35:53.733 "config": [
00:35:53.733 {
00:35:53.733 "method": "sock_set_default_impl",
00:35:53.733 "params": {
00:35:53.733 "impl_name": "posix"
00:35:53.733 }
00:35:53.733 },
00:35:53.733 {
00:35:53.733 "method": "sock_impl_set_options",
00:35:53.733 "params": {
00:35:53.733 "impl_name": "ssl",
00:35:53.733 "recv_buf_size": 4096,
00:35:53.733 "send_buf_size": 4096,
00:35:53.733 "enable_recv_pipe": true,
00:35:53.733 "enable_quickack": false,
00:35:53.733 "enable_placement_id": 0,
00:35:53.733 "enable_zerocopy_send_server": true,
00:35:53.733 "enable_zerocopy_send_client": false,
00:35:53.733 "zerocopy_threshold": 0,
00:35:53.733 "tls_version": 0,
00:35:53.733 "enable_ktls": false
00:35:53.733 }
00:35:53.733 },
00:35:53.733 {
00:35:53.733 "method": "sock_impl_set_options",
00:35:53.733 "params": {
00:35:53.733 "impl_name": "posix",
00:35:53.733 "recv_buf_size": 2097152,
00:35:53.733 "send_buf_size": 2097152,
00:35:53.733 "enable_recv_pipe": true,
00:35:53.733 "enable_quickack": false,
00:35:53.733 "enable_placement_id": 0,
00:35:53.733 "enable_zerocopy_send_server": true,
00:35:53.733 "enable_zerocopy_send_client": false,
00:35:53.733 "zerocopy_threshold": 0,
00:35:53.733 "tls_version": 0,
00:35:53.733 "enable_ktls": false
00:35:53.733 }
00:35:53.733 }
00:35:53.733 ]
00:35:53.733 },
00:35:53.733 {
00:35:53.733 "subsystem": "vmd",
00:35:53.733 "config": []
00:35:53.733 },
00:35:53.733 {
00:35:53.733 "subsystem": "accel",
00:35:53.733 "config": [
00:35:53.733 {
00:35:53.733 "method": "accel_set_options",
00:35:53.733 "params": {
00:35:53.733 "small_cache_size": 128,
00:35:53.733 "large_cache_size": 16,
00:35:53.733 "task_count": 2048,
00:35:53.733 "sequence_count": 2048,
00:35:53.733 "buf_count": 2048
00:35:53.733 }
00:35:53.733 }
00:35:53.733 ]
00:35:53.733 },
00:35:53.733 {
00:35:53.733 "subsystem": "bdev",
00:35:53.733 "config": [
00:35:53.733 {
00:35:53.733 "method": "bdev_set_options",
00:35:53.733 "params": {
00:35:53.733 "bdev_io_pool_size": 65535,
00:35:53.733 "bdev_io_cache_size": 256,
00:35:53.733 "bdev_auto_examine": true,
00:35:53.733 "iobuf_small_cache_size": 128,
00:35:53.733 "iobuf_large_cache_size": 16
00:35:53.733 }
00:35:53.733 },
00:35:53.733 {
00:35:53.733 "method": "bdev_raid_set_options",
00:35:53.733 "params": {
00:35:53.733 "process_window_size_kb": 1024,
00:35:53.733 "process_max_bandwidth_mb_sec": 0
00:35:53.733 }
00:35:53.733 },
00:35:53.733 {
00:35:53.733 "method": "bdev_iscsi_set_options",
00:35:53.733 "params": {
00:35:53.733 "timeout_sec": 30
00:35:53.733 }
00:35:53.733 },
00:35:53.733 {
00:35:53.733 "method": "bdev_nvme_set_options",
00:35:53.733 "params": {
00:35:53.733 "action_on_timeout": "none",
00:35:53.733 "timeout_us": 0,
00:35:53.733 "timeout_admin_us": 0,
00:35:53.733 "keep_alive_timeout_ms": 10000,
00:35:53.733 "arbitration_burst": 0,
00:35:53.733 "low_priority_weight": 0,
00:35:53.733 "medium_priority_weight": 0,
00:35:53.733 "high_priority_weight": 0,
00:35:53.734 "nvme_adminq_poll_period_us": 10000,
00:35:53.734 "nvme_ioq_poll_period_us": 0,
00:35:53.734 "io_queue_requests": 512,
00:35:53.734 "delay_cmd_submit": true,
00:35:53.734 "transport_retry_count": 4,
00:35:53.734 "bdev_retry_count": 3,
00:35:53.734 "transport_ack_timeout": 0,
00:35:53.734 "ctrlr_loss_timeout_sec": 0,
00:35:53.734 "reconnect_delay_sec": 0,
00:35:53.734 "fast_io_fail_timeout_sec": 0,
00:35:53.734 "disable_auto_failback": false,
00:35:53.734 "generate_uuids": false,
00:35:53.734 "transport_tos": 0,
00:35:53.734 "nvme_error_stat": false,
00:35:53.734 "rdma_srq_size": 0,
00:35:53.734 "io_path_stat": false,
00:35:53.734 "allow_accel_sequence": false,
00:35:53.734 "rdma_max_cq_size": 0,
00:35:53.734 "rdma_cm_event_timeout_ms": 0,
00:35:53.734 "dhchap_digests": [
00:35:53.734 "sha256",
00:35:53.734 "sha384",
00:35:53.734 "sha512"
00:35:53.734 ],
00:35:53.734 "dhchap_dhgroups": [
00:35:53.734 "null",
00:35:53.734 "ffdhe2048",
00:35:53.734 "ffdhe3072",
00:35:53.734 "ffdhe4096",
00:35:53.734 "ffdhe6144",
00:35:53.734 "ffdhe8192"
00:35:53.734 ]
00:35:53.734 }
00:35:53.734 },
00:35:53.734 {
00:35:53.734 "method": "bdev_nvme_attach_controller",
00:35:53.734 "params": {
00:35:53.734 "name": "nvme0",
00:35:53.734 "trtype": "TCP",
00:35:53.734 "adrfam": "IPv4",
00:35:53.734 "traddr": "127.0.0.1",
00:35:53.734 "trsvcid": "4420",
00:35:53.734 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:35:53.734 "prchk_reftag": false,
00:35:53.734 "prchk_guard": false,
00:35:53.734 "ctrlr_loss_timeout_sec": 0,
00:35:53.734 "reconnect_delay_sec": 0,
00:35:53.734 "fast_io_fail_timeout_sec": 0,
00:35:53.734 "psk": "key0",
00:35:53.734 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:35:53.734 "hdgst": false,
00:35:53.734 "ddgst": false,
00:35:53.734 "multipath": "multipath"
00:35:53.734 }
00:35:53.734 },
00:35:53.734 {
00:35:53.734 "method": "bdev_nvme_set_hotplug",
00:35:53.734 "params": {
00:35:53.734 "period_us": 100000,
00:35:53.734 "enable": false
00:35:53.734 }
00:35:53.734 },
00:35:53.734 {
00:35:53.734 "method": "bdev_wait_for_examine"
00:35:53.734 }
00:35:53.734 ]
00:35:53.734 },
00:35:53.734 {
00:35:53.734 "subsystem": "nbd",
00:35:53.734 "config": []
00:35:53.734 }
00:35:53.734 ]
00:35:53.734 }'
00:35:53.734 12:45:36 keyring_file -- keyring/file.sh@115 -- # killprocess 722979
00:35:53.734 12:45:36 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 722979 ']'
00:35:53.734 12:45:36 keyring_file -- common/autotest_common.sh@958 -- # kill -0 722979
00:35:53.734 12:45:36 keyring_file -- common/autotest_common.sh@959 -- # uname
00:35:53.734 12:45:36 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:53.734 12:45:36 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 722979
00:35:53.734 12:45:36 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:53.734 12:45:36 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:53.734 12:45:36 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 722979'
00:35:53.734 killing process with pid 722979
00:35:53.734 12:45:36 keyring_file -- common/autotest_common.sh@973 -- # kill 722979
00:35:53.734 Received shutdown signal, test time was about 1.000000 seconds
00:35:53.734
00:35:53.734 Latency(us)
00:35:53.734 [2024-11-20T11:45:36.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:53.734 [2024-11-20T11:45:36.850Z] ===================================================================================================================
00:35:53.734 [2024-11-20T11:45:36.850Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:53.734 12:45:36 keyring_file -- common/autotest_common.sh@978 -- # wait 722979
00:35:53.993 12:45:36 keyring_file -- keyring/file.sh@118 -- # bperfpid=724558
00:35:53.993 12:45:36 keyring_file -- keyring/file.sh@120 -- # waitforlisten 724558 /var/tmp/bperf.sock
00:35:53.993 12:45:36 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 724558 ']'
00:35:53.993 12:45:36 keyring_file -- common/autotest_common.sh@839 -- # local
rpc_addr=/var/tmp/bperf.sock 00:35:53.993 12:45:36 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:53.993 12:45:36 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:53.993 12:45:36 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:53.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:53.993 12:45:36 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:53.993 "subsystems": [ 00:35:53.993 { 00:35:53.993 "subsystem": "keyring", 00:35:53.993 "config": [ 00:35:53.993 { 00:35:53.993 "method": "keyring_file_add_key", 00:35:53.993 "params": { 00:35:53.993 "name": "key0", 00:35:53.993 "path": "/tmp/tmp.lv0OCQvmon" 00:35:53.993 } 00:35:53.993 }, 00:35:53.993 { 00:35:53.993 "method": "keyring_file_add_key", 00:35:53.993 "params": { 00:35:53.993 "name": "key1", 00:35:53.993 "path": "/tmp/tmp.wudhfqTXNK" 00:35:53.993 } 00:35:53.993 } 00:35:53.993 ] 00:35:53.993 }, 00:35:53.993 { 00:35:53.993 "subsystem": "iobuf", 00:35:53.993 "config": [ 00:35:53.993 { 00:35:53.993 "method": "iobuf_set_options", 00:35:53.993 "params": { 00:35:53.993 "small_pool_count": 8192, 00:35:53.993 "large_pool_count": 1024, 00:35:53.993 "small_bufsize": 8192, 00:35:53.993 "large_bufsize": 135168, 00:35:53.993 "enable_numa": false 00:35:53.993 } 00:35:53.993 } 00:35:53.993 ] 00:35:53.993 }, 00:35:53.993 { 00:35:53.993 "subsystem": "sock", 00:35:53.993 "config": [ 00:35:53.993 { 00:35:53.993 "method": "sock_set_default_impl", 00:35:53.993 "params": { 00:35:53.993 "impl_name": "posix" 00:35:53.993 } 00:35:53.993 }, 00:35:53.993 { 00:35:53.993 "method": "sock_impl_set_options", 00:35:53.993 "params": { 00:35:53.993 "impl_name": "ssl", 00:35:53.993 "recv_buf_size": 4096, 00:35:53.993 
"send_buf_size": 4096, 00:35:53.993 "enable_recv_pipe": true, 00:35:53.993 "enable_quickack": false, 00:35:53.993 "enable_placement_id": 0, 00:35:53.993 "enable_zerocopy_send_server": true, 00:35:53.993 "enable_zerocopy_send_client": false, 00:35:53.993 "zerocopy_threshold": 0, 00:35:53.993 "tls_version": 0, 00:35:53.993 "enable_ktls": false 00:35:53.993 } 00:35:53.993 }, 00:35:53.993 { 00:35:53.993 "method": "sock_impl_set_options", 00:35:53.993 "params": { 00:35:53.993 "impl_name": "posix", 00:35:53.993 "recv_buf_size": 2097152, 00:35:53.993 "send_buf_size": 2097152, 00:35:53.993 "enable_recv_pipe": true, 00:35:53.993 "enable_quickack": false, 00:35:53.993 "enable_placement_id": 0, 00:35:53.993 "enable_zerocopy_send_server": true, 00:35:53.993 "enable_zerocopy_send_client": false, 00:35:53.993 "zerocopy_threshold": 0, 00:35:53.993 "tls_version": 0, 00:35:53.993 "enable_ktls": false 00:35:53.993 } 00:35:53.993 } 00:35:53.993 ] 00:35:53.993 }, 00:35:53.993 { 00:35:53.993 "subsystem": "vmd", 00:35:53.993 "config": [] 00:35:53.993 }, 00:35:53.993 { 00:35:53.993 "subsystem": "accel", 00:35:53.993 "config": [ 00:35:53.993 { 00:35:53.993 "method": "accel_set_options", 00:35:53.993 "params": { 00:35:53.993 "small_cache_size": 128, 00:35:53.993 "large_cache_size": 16, 00:35:53.993 "task_count": 2048, 00:35:53.993 "sequence_count": 2048, 00:35:53.993 "buf_count": 2048 00:35:53.993 } 00:35:53.993 } 00:35:53.993 ] 00:35:53.993 }, 00:35:53.993 { 00:35:53.993 "subsystem": "bdev", 00:35:53.993 "config": [ 00:35:53.993 { 00:35:53.993 "method": "bdev_set_options", 00:35:53.993 "params": { 00:35:53.993 "bdev_io_pool_size": 65535, 00:35:53.993 "bdev_io_cache_size": 256, 00:35:53.993 "bdev_auto_examine": true, 00:35:53.993 "iobuf_small_cache_size": 128, 00:35:53.993 "iobuf_large_cache_size": 16 00:35:53.993 } 00:35:53.993 }, 00:35:53.993 { 00:35:53.993 "method": "bdev_raid_set_options", 00:35:53.993 "params": { 00:35:53.993 "process_window_size_kb": 1024, 00:35:53.993 
"process_max_bandwidth_mb_sec": 0 00:35:53.993 } 00:35:53.993 }, 00:35:53.993 { 00:35:53.993 "method": "bdev_iscsi_set_options", 00:35:53.993 "params": { 00:35:53.993 "timeout_sec": 30 00:35:53.993 } 00:35:53.993 }, 00:35:53.993 { 00:35:53.993 "method": "bdev_nvme_set_options", 00:35:53.993 "params": { 00:35:53.993 "action_on_timeout": "none", 00:35:53.993 "timeout_us": 0, 00:35:53.993 "timeout_admin_us": 0, 00:35:53.993 "keep_alive_timeout_ms": 10000, 00:35:53.993 "arbitration_burst": 0, 00:35:53.993 "low_priority_weight": 0, 00:35:53.993 "medium_priority_weight": 0, 00:35:53.993 "high_priority_weight": 0, 00:35:53.993 "nvme_adminq_poll_period_us": 10000, 00:35:53.993 "nvme_ioq_poll_period_us": 0, 00:35:53.993 "io_queue_requests": 512, 00:35:53.993 "delay_cmd_submit": true, 00:35:53.993 "transport_retry_count": 4, 00:35:53.993 "bdev_retry_count": 3, 00:35:53.994 "transport_ack_timeout": 0, 00:35:53.994 "ctrlr_loss_timeout_sec": 0, 00:35:53.994 "reconnect_delay_sec": 0, 00:35:53.994 "fast_io_fail_timeout_sec": 0, 00:35:53.994 "disable_auto_failback": false, 00:35:53.994 "generate_uuids": false, 00:35:53.994 "transport_tos": 0, 00:35:53.994 "nvme_error_stat": false, 00:35:53.994 "rdma_srq_size": 0, 00:35:53.994 "io_path_stat": false, 00:35:53.994 "allow_accel_sequence": false, 00:35:53.994 "rdma_max_cq_size": 0, 00:35:53.994 "rdma_cm_event_timeout_ms": 0, 00:35:53.994 "dhchap_digests": [ 00:35:53.994 "sha256", 00:35:53.994 "sha384", 00:35:53.994 "sha512" 00:35:53.994 ], 00:35:53.994 "dhchap_dhgroups": [ 00:35:53.994 "null", 00:35:53.994 "ffdhe2048", 00:35:53.994 "ffdhe3072", 00:35:53.994 "ffdhe4096", 00:35:53.994 "ffdhe6144", 00:35:53.994 "ffdhe8192" 00:35:53.994 ] 00:35:53.994 } 00:35:53.994 }, 00:35:53.994 { 00:35:53.994 "method": "bdev_nvme_attach_controller", 00:35:53.994 "params": { 00:35:53.994 "name": "nvme0", 00:35:53.994 "trtype": "TCP", 00:35:53.994 "adrfam": "IPv4", 00:35:53.994 "traddr": "127.0.0.1", 00:35:53.994 "trsvcid": "4420", 00:35:53.994 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:35:53.994 "prchk_reftag": false, 00:35:53.994 "prchk_guard": false, 00:35:53.994 "ctrlr_loss_timeout_sec": 0, 00:35:53.994 "reconnect_delay_sec": 0, 00:35:53.994 "fast_io_fail_timeout_sec": 0, 00:35:53.994 "psk": "key0", 00:35:53.994 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:53.994 "hdgst": false, 00:35:53.994 "ddgst": false, 00:35:53.994 "multipath": "multipath" 00:35:53.994 } 00:35:53.994 }, 00:35:53.994 { 00:35:53.994 "method": "bdev_nvme_set_hotplug", 00:35:53.994 "params": { 00:35:53.994 "period_us": 100000, 00:35:53.994 "enable": false 00:35:53.994 } 00:35:53.994 }, 00:35:53.994 { 00:35:53.994 "method": "bdev_wait_for_examine" 00:35:53.994 } 00:35:53.994 ] 00:35:53.994 }, 00:35:53.994 { 00:35:53.994 "subsystem": "nbd", 00:35:53.994 "config": [] 00:35:53.994 } 00:35:53.994 ] 00:35:53.994 }' 00:35:53.994 12:45:36 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:53.994 12:45:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:53.994 [2024-11-20 12:45:36.949688] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:35:53.994 [2024-11-20 12:45:36.949738] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724558 ] 00:35:53.994 [2024-11-20 12:45:37.024169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.994 [2024-11-20 12:45:37.067046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:54.252 [2024-11-20 12:45:37.227325] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:54.817 12:45:37 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:54.817 12:45:37 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:54.817 12:45:37 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:54.817 12:45:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:54.817 12:45:37 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:55.075 12:45:37 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:55.075 12:45:37 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:55.075 12:45:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:55.076 12:45:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:55.076 12:45:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:55.076 12:45:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:55.076 12:45:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:55.076 12:45:38 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:55.076 12:45:38 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:55.076 12:45:38 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:55.076 12:45:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:55.333 12:45:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:55.333 12:45:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:55.333 12:45:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:55.333 12:45:38 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:55.333 12:45:38 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:55.333 12:45:38 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:55.333 12:45:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:55.592 12:45:38 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:55.592 12:45:38 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:55.592 12:45:38 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.lv0OCQvmon /tmp/tmp.wudhfqTXNK 00:35:55.592 12:45:38 keyring_file -- keyring/file.sh@20 -- # killprocess 724558 00:35:55.592 12:45:38 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 724558 ']' 00:35:55.592 12:45:38 keyring_file -- common/autotest_common.sh@958 -- # kill -0 724558 00:35:55.592 12:45:38 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:55.592 12:45:38 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:55.592 12:45:38 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 724558 00:35:55.592 12:45:38 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:55.592 12:45:38 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:55.592 12:45:38 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 724558' 00:35:55.592 killing process with pid 724558 00:35:55.592 12:45:38 keyring_file -- common/autotest_common.sh@973 -- # kill 724558 00:35:55.592 Received shutdown signal, test time was about 1.000000 seconds 00:35:55.592 00:35:55.592 Latency(us) 00:35:55.592 [2024-11-20T11:45:38.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:55.592 [2024-11-20T11:45:38.708Z] =================================================================================================================== 00:35:55.592 [2024-11-20T11:45:38.708Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:55.592 12:45:38 keyring_file -- common/autotest_common.sh@978 -- # wait 724558 00:35:55.851 12:45:38 keyring_file -- keyring/file.sh@21 -- # killprocess 722804 00:35:55.851 12:45:38 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 722804 ']' 00:35:55.851 12:45:38 keyring_file -- common/autotest_common.sh@958 -- # kill -0 722804 00:35:55.851 12:45:38 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:55.851 12:45:38 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:55.851 12:45:38 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 722804 00:35:55.851 12:45:38 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:55.851 12:45:38 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:55.851 12:45:38 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 722804' 00:35:55.851 killing process with pid 722804 00:35:55.851 12:45:38 keyring_file -- common/autotest_common.sh@973 -- # kill 722804 00:35:55.851 12:45:38 keyring_file -- common/autotest_common.sh@978 -- # wait 722804 00:35:56.111 00:35:56.111 real 0m12.449s 00:35:56.111 user 0m30.426s 00:35:56.111 sys 0m2.683s 00:35:56.111 12:45:39 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:56.111 12:45:39 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:56.111 ************************************ 00:35:56.111 END TEST keyring_file 00:35:56.111 ************************************ 00:35:56.111 12:45:39 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:56.111 12:45:39 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:56.111 12:45:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:56.111 12:45:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:56.111 12:45:39 -- common/autotest_common.sh@10 -- # set +x 00:35:56.370 ************************************ 00:35:56.370 START TEST keyring_linux 00:35:56.370 ************************************ 00:35:56.370 12:45:39 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:56.370 Joined session keyring: 987919156 00:35:56.370 * Looking for test storage... 
00:35:56.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:56.370 12:45:39 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:56.370 12:45:39 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:35:56.370 12:45:39 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:56.370 12:45:39 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:56.370 12:45:39 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:56.370 12:45:39 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:56.370 12:45:39 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:56.370 12:45:39 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:56.370 12:45:39 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:56.370 12:45:39 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:56.370 12:45:39 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:56.371 12:45:39 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:56.371 12:45:39 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:56.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.371 --rc genhtml_branch_coverage=1 00:35:56.371 --rc genhtml_function_coverage=1 00:35:56.371 --rc genhtml_legend=1 00:35:56.371 --rc geninfo_all_blocks=1 00:35:56.371 --rc geninfo_unexecuted_blocks=1 00:35:56.371 00:35:56.371 ' 00:35:56.371 12:45:39 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:56.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.371 --rc genhtml_branch_coverage=1 00:35:56.371 --rc genhtml_function_coverage=1 00:35:56.371 --rc genhtml_legend=1 00:35:56.371 --rc geninfo_all_blocks=1 00:35:56.371 --rc geninfo_unexecuted_blocks=1 00:35:56.371 00:35:56.371 ' 
00:35:56.371 12:45:39 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:56.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.371 --rc genhtml_branch_coverage=1 00:35:56.371 --rc genhtml_function_coverage=1 00:35:56.371 --rc genhtml_legend=1 00:35:56.371 --rc geninfo_all_blocks=1 00:35:56.371 --rc geninfo_unexecuted_blocks=1 00:35:56.371 00:35:56.371 ' 00:35:56.371 12:45:39 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:56.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.371 --rc genhtml_branch_coverage=1 00:35:56.371 --rc genhtml_function_coverage=1 00:35:56.371 --rc genhtml_legend=1 00:35:56.371 --rc geninfo_all_blocks=1 00:35:56.371 --rc geninfo_unexecuted_blocks=1 00:35:56.371 00:35:56.371 ' 00:35:56.371 12:45:39 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:56.371 12:45:39 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:56.371 12:45:39 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:56.371 12:45:39 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.371 12:45:39 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.371 12:45:39 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.371 12:45:39 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:56.371 12:45:39 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:56.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:56.371 12:45:39 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:56.371 12:45:39 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:56.371 12:45:39 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:56.371 12:45:39 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:56.371 12:45:39 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:56.371 12:45:39 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:56.371 12:45:39 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:56.371 12:45:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:56.371 12:45:39 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:56.371 12:45:39 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:56.371 12:45:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:56.371 12:45:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:56.371 12:45:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:56.371 12:45:39 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:56.630 12:45:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:56.630 12:45:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:56.630 /tmp/:spdk-test:key0 00:35:56.630 12:45:39 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:56.630 12:45:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:56.630 12:45:39 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:56.630 12:45:39 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:56.630 12:45:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:56.630 12:45:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:56.630 12:45:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:56.630 12:45:39 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:56.630 12:45:39 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:56.630 12:45:39 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:56.630 12:45:39 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:56.630 12:45:39 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:56.630 12:45:39 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:56.630 12:45:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:56.630 12:45:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:56.630 /tmp/:spdk-test:key1 00:35:56.630 12:45:39 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=725013 00:35:56.630 12:45:39 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 725013 00:35:56.630 12:45:39 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:56.630 12:45:39 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 725013 ']' 00:35:56.630 12:45:39 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:56.630 12:45:39 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:56.630 12:45:39 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:56.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:56.630 12:45:39 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:56.630 12:45:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:56.630 [2024-11-20 12:45:39.598284] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:35:56.630 [2024-11-20 12:45:39.598339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725013 ] 00:35:56.630 [2024-11-20 12:45:39.674119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:56.630 [2024-11-20 12:45:39.718124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:56.889 12:45:39 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:56.889 12:45:39 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:56.889 12:45:39 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:56.889 12:45:39 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.889 12:45:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:56.889 [2024-11-20 12:45:39.942941] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:56.889 null0 00:35:56.889 [2024-11-20 12:45:39.974998] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:56.889 [2024-11-20 12:45:39.975347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:56.889 12:45:39 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.889 12:45:39 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:56.889 745989418 00:35:56.889 12:45:39 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:56.889 517943876 00:35:56.889 12:45:40 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=725121 00:35:56.889 12:45:40 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:56.889 12:45:40 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 725121 /var/tmp/bperf.sock 00:35:56.889 12:45:40 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 725121 ']' 00:35:57.147 12:45:40 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:57.147 12:45:40 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:57.147 12:45:40 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:57.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:57.147 12:45:40 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:57.147 12:45:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:57.147 [2024-11-20 12:45:40.046664] Starting SPDK v25.01-pre git sha1 0383e688b / DPDK 24.03.0 initialization... 
00:35:57.147 [2024-11-20 12:45:40.046713] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725121 ] 00:35:57.147 [2024-11-20 12:45:40.119637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.147 [2024-11-20 12:45:40.162473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:57.147 12:45:40 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:57.147 12:45:40 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:57.147 12:45:40 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:57.147 12:45:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:57.405 12:45:40 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:57.405 12:45:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:57.662 12:45:40 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:57.662 12:45:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:57.920 [2024-11-20 12:45:40.835242] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:57.920 nvme0n1 00:35:57.920 12:45:40 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:57.920 12:45:40 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:57.920 12:45:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:57.920 12:45:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:57.920 12:45:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:57.920 12:45:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:58.178 12:45:41 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:58.178 12:45:41 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:58.178 12:45:41 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:58.178 12:45:41 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:58.178 12:45:41 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:58.178 12:45:41 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:58.178 12:45:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.437 12:45:41 keyring_linux -- keyring/linux.sh@25 -- # sn=745989418 00:35:58.437 12:45:41 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:58.437 12:45:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:58.437 12:45:41 keyring_linux -- keyring/linux.sh@26 -- # [[ 745989418 == \7\4\5\9\8\9\4\1\8 ]] 00:35:58.437 12:45:41 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 745989418 00:35:58.437 12:45:41 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:58.437 12:45:41 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:58.437 Running I/O for 1 seconds... 00:35:59.373 21200.00 IOPS, 82.81 MiB/s 00:35:59.373 Latency(us) 00:35:59.373 [2024-11-20T11:45:42.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:59.373 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:59.373 nvme0n1 : 1.01 21200.16 82.81 0.00 0.00 6017.66 2322.25 7408.42 00:35:59.373 [2024-11-20T11:45:42.489Z] =================================================================================================================== 00:35:59.373 [2024-11-20T11:45:42.489Z] Total : 21200.16 82.81 0.00 0.00 6017.66 2322.25 7408.42 00:35:59.373 { 00:35:59.373 "results": [ 00:35:59.373 { 00:35:59.373 "job": "nvme0n1", 00:35:59.373 "core_mask": "0x2", 00:35:59.373 "workload": "randread", 00:35:59.373 "status": "finished", 00:35:59.373 "queue_depth": 128, 00:35:59.373 "io_size": 4096, 00:35:59.373 "runtime": 1.00603, 00:35:59.373 "iops": 21200.163017007446, 00:35:59.373 "mibps": 82.81313678518534, 00:35:59.373 "io_failed": 0, 00:35:59.373 "io_timeout": 0, 00:35:59.373 "avg_latency_us": 6017.663803777031, 00:35:59.373 "min_latency_us": 2322.2539130434784, 00:35:59.373 "max_latency_us": 7408.417391304348 00:35:59.373 } 00:35:59.373 ], 00:35:59.373 "core_count": 1 00:35:59.373 } 00:35:59.373 12:45:42 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:59.373 12:45:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:59.632 12:45:42 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:59.632 12:45:42 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:59.632 12:45:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:59.632 12:45:42 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:59.632 12:45:42 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:59.632 12:45:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:59.890 12:45:42 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:59.890 12:45:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:59.890 12:45:42 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:59.890 12:45:42 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:59.890 12:45:42 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:59.890 12:45:42 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:59.890 12:45:42 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:59.890 12:45:42 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:59.890 12:45:42 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:59.890 12:45:42 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:59.890 12:45:42 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:59.890 12:45:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:00.149 [2024-11-20 12:45:43.064014] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:00.149 [2024-11-20 12:45:43.064333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de1a70 (107): Transport endpoint is not connected 00:36:00.149 [2024-11-20 12:45:43.065329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de1a70 (9): Bad file descriptor 00:36:00.149 [2024-11-20 12:45:43.066330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:00.149 [2024-11-20 12:45:43.066339] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:00.149 [2024-11-20 12:45:43.066346] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:00.149 [2024-11-20 12:45:43.066353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:36:00.149 request: 00:36:00.149 { 00:36:00.149 "name": "nvme0", 00:36:00.149 "trtype": "tcp", 00:36:00.149 "traddr": "127.0.0.1", 00:36:00.149 "adrfam": "ipv4", 00:36:00.149 "trsvcid": "4420", 00:36:00.149 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:00.149 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:00.149 "prchk_reftag": false, 00:36:00.149 "prchk_guard": false, 00:36:00.149 "hdgst": false, 00:36:00.149 "ddgst": false, 00:36:00.149 "psk": ":spdk-test:key1", 00:36:00.149 "allow_unrecognized_csi": false, 00:36:00.149 "method": "bdev_nvme_attach_controller", 00:36:00.149 "req_id": 1 00:36:00.149 } 00:36:00.149 Got JSON-RPC error response 00:36:00.149 response: 00:36:00.149 { 00:36:00.149 "code": -5, 00:36:00.149 "message": "Input/output error" 00:36:00.149 } 00:36:00.149 12:45:43 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:36:00.149 12:45:43 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:00.149 12:45:43 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:00.149 12:45:43 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:00.149 12:45:43 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:00.149 12:45:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:00.149 12:45:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:00.149 12:45:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:00.149 12:45:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:00.149 12:45:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:00.149 12:45:43 keyring_linux -- keyring/linux.sh@33 -- # sn=745989418 00:36:00.149 12:45:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 745989418 00:36:00.149 1 links removed 00:36:00.149 12:45:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:00.149 12:45:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:00.149 
12:45:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:00.149 12:45:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:00.149 12:45:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:00.149 12:45:43 keyring_linux -- keyring/linux.sh@33 -- # sn=517943876 00:36:00.149 12:45:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 517943876 00:36:00.149 1 links removed 00:36:00.149 12:45:43 keyring_linux -- keyring/linux.sh@41 -- # killprocess 725121 00:36:00.149 12:45:43 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 725121 ']' 00:36:00.149 12:45:43 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 725121 00:36:00.149 12:45:43 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:00.149 12:45:43 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:00.149 12:45:43 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 725121 00:36:00.149 12:45:43 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:00.149 12:45:43 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:00.149 12:45:43 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 725121' 00:36:00.149 killing process with pid 725121 00:36:00.149 12:45:43 keyring_linux -- common/autotest_common.sh@973 -- # kill 725121 00:36:00.149 Received shutdown signal, test time was about 1.000000 seconds 00:36:00.149 00:36:00.149 Latency(us) 00:36:00.149 [2024-11-20T11:45:43.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:00.149 [2024-11-20T11:45:43.265Z] =================================================================================================================== 00:36:00.149 [2024-11-20T11:45:43.265Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:00.149 12:45:43 keyring_linux -- common/autotest_common.sh@978 -- # wait 725121 
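For reference, the throughput columns in the bdevperf summary above are plain arithmetic over the results JSON: IOPS is completions divided by runtime, and MiB/s scales IOPS by the 4096-byte I/O size. A sketch of that arithmetic (the raw completion count of 21328 is an assumption back-computed from the logged IOPS and runtime, since bdevperf does not print it):

```python
io_size = 4096     # bytes per read, from "io_size": 4096 in the results JSON
runtime = 1.00603  # seconds, from "runtime": 1.00603
ios = 21328        # assumed completion count, inferred as iops * runtime

iops = ios / runtime                  # reported as 21200.163017...
mibps = iops * io_size / (1 << 20)    # reported as 82.813136... MiB/s
```

This also explains why the post-shutdown Latency table reports all zeros: no I/O remained in flight once the controller was detached.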
00:36:00.408 12:45:43 keyring_linux -- keyring/linux.sh@42 -- # killprocess 725013 00:36:00.408 12:45:43 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 725013 ']' 00:36:00.408 12:45:43 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 725013 00:36:00.408 12:45:43 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:00.408 12:45:43 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:00.408 12:45:43 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 725013 00:36:00.408 12:45:43 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:00.408 12:45:43 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:00.408 12:45:43 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 725013' 00:36:00.408 killing process with pid 725013 00:36:00.408 12:45:43 keyring_linux -- common/autotest_common.sh@973 -- # kill 725013 00:36:00.408 12:45:43 keyring_linux -- common/autotest_common.sh@978 -- # wait 725013 00:36:00.667 00:36:00.667 real 0m4.424s 00:36:00.667 user 0m8.423s 00:36:00.667 sys 0m1.401s 00:36:00.667 12:45:43 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:00.667 12:45:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:00.667 ************************************ 00:36:00.667 END TEST keyring_linux 00:36:00.667 ************************************ 00:36:00.667 12:45:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:00.667 12:45:43 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:00.667 12:45:43 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:00.667 12:45:43 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:00.667 12:45:43 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:00.667 12:45:43 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:00.667 12:45:43 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:00.667 12:45:43 -- spdk/autotest.sh@346 -- # '[' 0 
-eq 1 ']' 00:36:00.667 12:45:43 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:00.667 12:45:43 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:00.667 12:45:43 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:00.667 12:45:43 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:00.667 12:45:43 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:00.667 12:45:43 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:00.667 12:45:43 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:00.667 12:45:43 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:00.668 12:45:43 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:00.668 12:45:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:00.668 12:45:43 -- common/autotest_common.sh@10 -- # set +x 00:36:00.668 12:45:43 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:00.668 12:45:43 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:00.668 12:45:43 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:00.668 12:45:43 -- common/autotest_common.sh@10 -- # set +x 00:36:05.946 INFO: APP EXITING 00:36:05.946 INFO: killing all VMs 00:36:05.946 INFO: killing vhost app 00:36:05.946 INFO: EXIT DONE 00:36:08.483 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:36:08.483 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:08.483 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:08.483 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:08.483 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:08.483 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:08.483 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:08.483 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:08.483 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:08.483 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:08.483 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:08.483 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:08.483 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:08.483 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:08.742 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:08.742 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:08.742 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:12.033 Cleaning 00:36:12.033 Removing: /var/run/dpdk/spdk0/config 00:36:12.033 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:12.033 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:12.033 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:12.033 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:12.033 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:12.033 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:12.033 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:12.033 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:12.033 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:12.033 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:12.033 Removing: /var/run/dpdk/spdk1/config 00:36:12.033 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:12.033 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:12.033 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:12.033 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:12.033 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:12.033 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:12.033 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:12.033 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:12.033 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:12.033 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:12.033 Removing: /var/run/dpdk/spdk2/config 00:36:12.033 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:12.033 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:12.033 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:12.033 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:12.033 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:12.033 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:12.033 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:12.033 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:12.033 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:12.033 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:12.033 Removing: /var/run/dpdk/spdk3/config 00:36:12.033 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:12.033 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:12.033 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:12.033 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:12.033 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:12.033 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:12.033 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:12.033 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:12.033 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:12.033 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:12.033 Removing: /var/run/dpdk/spdk4/config 00:36:12.033 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:12.033 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:12.033 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:12.033 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:12.033 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:12.033 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:12.033 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:12.033 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:12.033 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:12.033 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:36:12.033 Removing: /dev/shm/bdev_svc_trace.1 00:36:12.033 Removing: /dev/shm/nvmf_trace.0 00:36:12.033 Removing: /dev/shm/spdk_tgt_trace.pid244644 00:36:12.033 Removing: /var/run/dpdk/spdk0 00:36:12.033 Removing: /var/run/dpdk/spdk1 00:36:12.033 Removing: /var/run/dpdk/spdk2 00:36:12.033 Removing: /var/run/dpdk/spdk3 00:36:12.033 Removing: /var/run/dpdk/spdk4 00:36:12.033 Removing: /var/run/dpdk/spdk_pid242494 00:36:12.033 Removing: /var/run/dpdk/spdk_pid243562 00:36:12.033 Removing: /var/run/dpdk/spdk_pid244644 00:36:12.033 Removing: /var/run/dpdk/spdk_pid245279 00:36:12.033 Removing: /var/run/dpdk/spdk_pid246225 00:36:12.033 Removing: /var/run/dpdk/spdk_pid246251 00:36:12.033 Removing: /var/run/dpdk/spdk_pid247272 00:36:12.033 Removing: /var/run/dpdk/spdk_pid247445 00:36:12.033 Removing: /var/run/dpdk/spdk_pid247733 00:36:12.033 Removing: /var/run/dpdk/spdk_pid249319 00:36:12.033 Removing: /var/run/dpdk/spdk_pid250597 00:36:12.033 Removing: /var/run/dpdk/spdk_pid250922 00:36:12.033 Removing: /var/run/dpdk/spdk_pid251172 00:36:12.034 Removing: /var/run/dpdk/spdk_pid251484 00:36:12.034 Removing: /var/run/dpdk/spdk_pid251774 00:36:12.034 Removing: /var/run/dpdk/spdk_pid252026 00:36:12.034 Removing: /var/run/dpdk/spdk_pid252273 00:36:12.034 Removing: /var/run/dpdk/spdk_pid252558 00:36:12.034 Removing: /var/run/dpdk/spdk_pid253297 00:36:12.034 Removing: /var/run/dpdk/spdk_pid256300 00:36:12.034 Removing: /var/run/dpdk/spdk_pid256556 00:36:12.034 Removing: /var/run/dpdk/spdk_pid256826 00:36:12.034 Removing: /var/run/dpdk/spdk_pid256927 00:36:12.034 Removing: /var/run/dpdk/spdk_pid257451 00:36:12.034 Removing: /var/run/dpdk/spdk_pid257542 00:36:12.034 Removing: /var/run/dpdk/spdk_pid257952 00:36:12.034 Removing: /var/run/dpdk/spdk_pid258178 00:36:12.034 Removing: /var/run/dpdk/spdk_pid258523 00:36:12.034 Removing: /var/run/dpdk/spdk_pid258610 00:36:12.034 Removing: /var/run/dpdk/spdk_pid259066 00:36:12.034 Removing: /var/run/dpdk/spdk_pid259101 00:36:12.034 
Removing: /var/run/dpdk/spdk_pid259669 00:36:12.034 Removing: /var/run/dpdk/spdk_pid259916 00:36:12.034 Removing: /var/run/dpdk/spdk_pid260215 00:36:12.034 Removing: /var/run/dpdk/spdk_pid263917 00:36:12.034 Removing: /var/run/dpdk/spdk_pid268186 00:36:12.034 Removing: /var/run/dpdk/spdk_pid278250 00:36:12.034 Removing: /var/run/dpdk/spdk_pid278905 00:36:12.034 Removing: /var/run/dpdk/spdk_pid283180 00:36:12.034 Removing: /var/run/dpdk/spdk_pid283553 00:36:12.034 Removing: /var/run/dpdk/spdk_pid287921 00:36:12.034 Removing: /var/run/dpdk/spdk_pid293827 00:36:12.034 Removing: /var/run/dpdk/spdk_pid296467 00:36:12.034 Removing: /var/run/dpdk/spdk_pid306980 00:36:12.034 Removing: /var/run/dpdk/spdk_pid316433 00:36:12.034 Removing: /var/run/dpdk/spdk_pid318148 00:36:12.034 Removing: /var/run/dpdk/spdk_pid319070 00:36:12.034 Removing: /var/run/dpdk/spdk_pid335938 00:36:12.034 Removing: /var/run/dpdk/spdk_pid340012 00:36:12.034 Removing: /var/run/dpdk/spdk_pid385761 00:36:12.034 Removing: /var/run/dpdk/spdk_pid391010 00:36:12.034 Removing: /var/run/dpdk/spdk_pid396929 00:36:12.034 Removing: /var/run/dpdk/spdk_pid403423 00:36:12.034 Removing: /var/run/dpdk/spdk_pid403428 00:36:12.034 Removing: /var/run/dpdk/spdk_pid404340 00:36:12.034 Removing: /var/run/dpdk/spdk_pid405378 00:36:12.034 Removing: /var/run/dpdk/spdk_pid406145 00:36:12.034 Removing: /var/run/dpdk/spdk_pid407148 00:36:12.034 Removing: /var/run/dpdk/spdk_pid407160 00:36:12.034 Removing: /var/run/dpdk/spdk_pid407388 00:36:12.034 Removing: /var/run/dpdk/spdk_pid407615 00:36:12.034 Removing: /var/run/dpdk/spdk_pid407622 00:36:12.034 Removing: /var/run/dpdk/spdk_pid408531 00:36:12.034 Removing: /var/run/dpdk/spdk_pid409300 00:36:12.034 Removing: /var/run/dpdk/spdk_pid410159 00:36:12.034 Removing: /var/run/dpdk/spdk_pid410839 00:36:12.034 Removing: /var/run/dpdk/spdk_pid410841 00:36:12.034 Removing: /var/run/dpdk/spdk_pid411077 00:36:12.034 Removing: /var/run/dpdk/spdk_pid412097 00:36:12.034 Removing: 
/var/run/dpdk/spdk_pid413091
00:36:12.034 Removing: /var/run/dpdk/spdk_pid421382
00:36:12.034 Removing: /var/run/dpdk/spdk_pid450758
00:36:12.034 Removing: /var/run/dpdk/spdk_pid455383
00:36:12.034 Removing: /var/run/dpdk/spdk_pid457086
00:36:12.034 Removing: /var/run/dpdk/spdk_pid458920
00:36:12.034 Removing: /var/run/dpdk/spdk_pid458947
00:36:12.034 Removing: /var/run/dpdk/spdk_pid459181
00:36:12.034 Removing: /var/run/dpdk/spdk_pid459271
00:36:12.034 Removing: /var/run/dpdk/spdk_pid459746
00:36:12.034 Removing: /var/run/dpdk/spdk_pid461537
00:36:12.034 Removing: /var/run/dpdk/spdk_pid462526
00:36:12.034 Removing: /var/run/dpdk/spdk_pid463022
00:36:12.034 Removing: /var/run/dpdk/spdk_pid465345
00:36:12.034 Removing: /var/run/dpdk/spdk_pid465794
00:36:12.034 Removing: /var/run/dpdk/spdk_pid466340
00:36:12.034 Removing: /var/run/dpdk/spdk_pid470609
00:36:12.034 Removing: /var/run/dpdk/spdk_pid476081
00:36:12.034 Removing: /var/run/dpdk/spdk_pid476084
00:36:12.034 Removing: /var/run/dpdk/spdk_pid476087
00:36:12.034 Removing: /var/run/dpdk/spdk_pid480118
00:36:12.034 Removing: /var/run/dpdk/spdk_pid488849
00:36:12.034 Removing: /var/run/dpdk/spdk_pid492662
00:36:12.034 Removing: /var/run/dpdk/spdk_pid498891
00:36:12.034 Removing: /var/run/dpdk/spdk_pid500115
00:36:12.034 Removing: /var/run/dpdk/spdk_pid501507
00:36:12.034 Removing: /var/run/dpdk/spdk_pid502898
00:36:12.034 Removing: /var/run/dpdk/spdk_pid507537
00:36:12.034 Removing: /var/run/dpdk/spdk_pid511889
00:36:12.034 Removing: /var/run/dpdk/spdk_pid515923
00:36:12.034 Removing: /var/run/dpdk/spdk_pid523524
00:36:12.034 Removing: /var/run/dpdk/spdk_pid523527
00:36:12.034 Removing: /var/run/dpdk/spdk_pid528078
00:36:12.034 Removing: /var/run/dpdk/spdk_pid528251
00:36:12.034 Removing: /var/run/dpdk/spdk_pid528482
00:36:12.034 Removing: /var/run/dpdk/spdk_pid528944
00:36:12.034 Removing: /var/run/dpdk/spdk_pid528952
00:36:12.034 Removing: /var/run/dpdk/spdk_pid533559
00:36:12.034 Removing: /var/run/dpdk/spdk_pid534517
00:36:12.034 Removing: /var/run/dpdk/spdk_pid538849
00:36:12.294 Removing: /var/run/dpdk/spdk_pid541595
00:36:12.294 Removing: /var/run/dpdk/spdk_pid547000
00:36:12.294 Removing: /var/run/dpdk/spdk_pid552326
00:36:12.294 Removing: /var/run/dpdk/spdk_pid561054
00:36:12.294 Removing: /var/run/dpdk/spdk_pid568132
00:36:12.294 Removing: /var/run/dpdk/spdk_pid568187
00:36:12.294 Removing: /var/run/dpdk/spdk_pid587631
00:36:12.294 Removing: /var/run/dpdk/spdk_pid588102
00:36:12.294 Removing: /var/run/dpdk/spdk_pid588744
00:36:12.294 Removing: /var/run/dpdk/spdk_pid589264
00:36:12.294 Removing: /var/run/dpdk/spdk_pid590006
00:36:12.294 Removing: /var/run/dpdk/spdk_pid590481
00:36:12.294 Removing: /var/run/dpdk/spdk_pid591062
00:36:12.294 Removing: /var/run/dpdk/spdk_pid591637
00:36:12.294 Removing: /var/run/dpdk/spdk_pid595697
00:36:12.294 Removing: /var/run/dpdk/spdk_pid596046
00:36:12.294 Removing: /var/run/dpdk/spdk_pid602031
00:36:12.294 Removing: /var/run/dpdk/spdk_pid602267
00:36:12.294 Removing: /var/run/dpdk/spdk_pid607542
00:36:12.294 Removing: /var/run/dpdk/spdk_pid611759
00:36:12.294 Removing: /var/run/dpdk/spdk_pid621706
00:36:12.294 Removing: /var/run/dpdk/spdk_pid622265
00:36:12.294 Removing: /var/run/dpdk/spdk_pid627107
00:36:12.294 Removing: /var/run/dpdk/spdk_pid627414
00:36:12.294 Removing: /var/run/dpdk/spdk_pid631511
00:36:12.294 Removing: /var/run/dpdk/spdk_pid637292
00:36:12.294 Removing: /var/run/dpdk/spdk_pid639880
00:36:12.294 Removing: /var/run/dpdk/spdk_pid649818
00:36:12.294 Removing: /var/run/dpdk/spdk_pid658486
00:36:12.294 Removing: /var/run/dpdk/spdk_pid660297
00:36:12.294 Removing: /var/run/dpdk/spdk_pid661181
00:36:12.294 Removing: /var/run/dpdk/spdk_pid677884
00:36:12.294 Removing: /var/run/dpdk/spdk_pid681690
00:36:12.294 Removing: /var/run/dpdk/spdk_pid684372
00:36:12.294 Removing: /var/run/dpdk/spdk_pid692141
00:36:12.294 Removing: /var/run/dpdk/spdk_pid692213
00:36:12.294 Removing: /var/run/dpdk/spdk_pid697358
00:36:12.294 Removing: /var/run/dpdk/spdk_pid699209
00:36:12.294 Removing: /var/run/dpdk/spdk_pid701074
00:36:12.294 Removing: /var/run/dpdk/spdk_pid702253
00:36:12.294 Removing: /var/run/dpdk/spdk_pid704305
00:36:12.294 Removing: /var/run/dpdk/spdk_pid705367
00:36:12.294 Removing: /var/run/dpdk/spdk_pid714122
00:36:12.294 Removing: /var/run/dpdk/spdk_pid714603
00:36:12.294 Removing: /var/run/dpdk/spdk_pid715256
00:36:12.294 Removing: /var/run/dpdk/spdk_pid717646
00:36:12.294 Removing: /var/run/dpdk/spdk_pid718492
00:36:12.294 Removing: /var/run/dpdk/spdk_pid718985
00:36:12.294 Removing: /var/run/dpdk/spdk_pid722804
00:36:12.294 Removing: /var/run/dpdk/spdk_pid722979
00:36:12.294 Removing: /var/run/dpdk/spdk_pid724558
00:36:12.294 Removing: /var/run/dpdk/spdk_pid725013
00:36:12.294 Removing: /var/run/dpdk/spdk_pid725121
00:36:12.294 Clean
00:36:12.559 12:45:55 -- common/autotest_common.sh@1453 -- # return 0
00:36:12.559 12:45:55 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:36:12.559 12:45:55 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:12.559 12:45:55 -- common/autotest_common.sh@10 -- # set +x
00:36:12.559 12:45:55 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:36:12.559 12:45:55 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:12.559 12:45:55 -- common/autotest_common.sh@10 -- # set +x
00:36:12.559 12:45:55 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:12.559 12:45:55 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:36:12.559 12:45:55 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:36:12.559 12:45:55 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:36:12.559 12:45:55 -- spdk/autotest.sh@398 -- # hostname
00:36:12.559 12:45:55 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:36:12.889 geninfo: WARNING: invalid characters removed from testname!
00:36:34.853 12:46:16 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:36.234 12:46:19 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:38.139 12:46:21 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:40.046 12:46:23 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:41.950 12:46:24 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:43.869 12:46:26 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:45.774 12:46:28 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:45.774 12:46:28 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:45.774 12:46:28 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:36:45.774 12:46:28 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:45.774 12:46:28 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:45.774 12:46:28 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:45.774 + [[ -n 165037 ]]
00:36:45.774 + sudo kill 165037
00:36:45.784 [Pipeline] }
00:36:45.800 [Pipeline] // stage
00:36:45.804 [Pipeline] }
00:36:45.818 [Pipeline] // timeout
00:36:45.823 [Pipeline] }
00:36:45.837 [Pipeline] // catchError
00:36:45.843 [Pipeline] }
00:36:45.857 [Pipeline] // wrap
00:36:45.863 [Pipeline] }
00:36:45.876 [Pipeline] // catchError
00:36:45.885 [Pipeline] stage
00:36:45.887 [Pipeline] { (Epilogue)
00:36:45.900 [Pipeline] catchError
00:36:45.902 [Pipeline] {
00:36:45.915 [Pipeline] echo
00:36:45.917 Cleanup processes
00:36:45.922 [Pipeline] sh
00:36:46.208 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:46.208 735549 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:46.221 [Pipeline] sh
00:36:46.511 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:46.511 ++ grep -v 'sudo pgrep'
00:36:46.511 ++ awk '{print $1}'
00:36:46.511 + sudo kill -9
00:36:46.511 + true
00:36:46.523 [Pipeline] sh
00:36:46.807 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:59.146 [Pipeline] sh
00:36:59.428 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:59.428 Artifacts sizes are good
00:36:59.443 [Pipeline] archiveArtifacts
00:36:59.450 Archiving artifacts
00:36:59.571 [Pipeline] sh
00:36:59.856 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:59.870 [Pipeline] cleanWs
00:36:59.881 [WS-CLEANUP] Deleting project workspace...
00:36:59.881 [WS-CLEANUP] Deferred wipeout is used...
00:36:59.887 [WS-CLEANUP] done
00:36:59.889 [Pipeline] }
00:36:59.906 [Pipeline] // catchError
00:36:59.918 [Pipeline] sh
00:37:00.200 + logger -p user.info -t JENKINS-CI
00:37:00.209 [Pipeline] }
00:37:00.222 [Pipeline] // stage
00:37:00.227 [Pipeline] }
00:37:00.240 [Pipeline] // node
00:37:00.244 [Pipeline] End of Pipeline
00:37:00.283 Finished: SUCCESS
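The lcov invocations in the log above follow a standard capture → merge → filter pattern: `-a` appends tracefiles into a combined `cov_total.info`, and repeated `-r` passes strip records whose source paths match a glob (dpdk, `/usr`, example apps). The sketch below reproduces that pattern under stated assumptions: the temp directory and minimal dummy `.info` files are placeholders, not the pipeline's real inputs, and the script falls back to a plain copy when `lcov` is not installed so it still completes anywhere.

```shell
# Hedged sketch of the coverage post-processing seen in the log.
# OUT and the dummy tracefiles are illustrative, not the pipeline's paths.
OUT=$(mktemp -d)

# Minimal valid lcov tracefiles (one source file, one executed line).
printf 'TN:\nSF:/tmp/demo.c\nDA:1,1\nend_of_record\n' > "$OUT/cov_base.info"
printf 'TN:\nSF:/tmp/demo.c\nDA:1,1\nend_of_record\n' > "$OUT/cov_test.info"

if command -v lcov >/dev/null 2>&1 &&
   lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
        -o "$OUT/cov_total.info" 2>/dev/null; then
    # -r removes records matching the glob, as the @400-@407 steps do.
    lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' \
         -o "$OUT/cov_total.info" 2>/dev/null || true
else
    # Keep the sketch runnable on machines without lcov.
    cp "$OUT/cov_test.info" "$OUT/cov_total.info"
fi

[ -s "$OUT/cov_total.info" ] && echo "cov_total.info written"
```

Filtering in place (`-r` reading and writing the same `cov_total.info`) is safe because lcov parses the whole input before writing, which is why the pipeline can chain several removal passes over one file.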